00:00:00.000 Started by upstream project "autotest-per-patch" build number 132401
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.012 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.016 The recommended git tool is: git
00:00:00.017 using credential 00000000-0000-0000-0000-000000000002
00:00:00.020 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.040 Fetching changes from the remote Git repository
00:00:00.042 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.070 Using shallow fetch with depth 1
00:00:00.070 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.070 > git --version # timeout=10
00:00:00.116 > git --version # 'git version 2.39.2'
00:00:00.116 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.191 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.191 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.505 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.519 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.534 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:02.534 > git config core.sparsecheckout # timeout=10
00:00:02.547 > git read-tree -mu HEAD # timeout=10
00:00:02.568 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:02.593 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:02.594 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:02.686 [Pipeline] Start of Pipeline
00:00:02.701 [Pipeline] library
00:00:02.702 Loading library shm_lib@master
00:00:02.702 Library shm_lib@master is cached. Copying from home.
00:00:02.717 [Pipeline] node
00:00:02.726 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest
00:00:02.728 [Pipeline] {
00:00:02.739 [Pipeline] catchError
00:00:02.741 [Pipeline] {
00:00:02.754 [Pipeline] wrap
00:00:02.763 [Pipeline] {
00:00:02.772 [Pipeline] stage
00:00:02.774 [Pipeline] { (Prologue)
00:00:02.794 [Pipeline] echo
00:00:02.796 Node: VM-host-SM17
00:00:02.803 [Pipeline] cleanWs
00:00:02.813 [WS-CLEANUP] Deleting project workspace...
00:00:02.813 [WS-CLEANUP] Deferred wipeout is used...
00:00:02.819 [WS-CLEANUP] done
00:00:03.018 [Pipeline] setCustomBuildProperty
00:00:03.106 [Pipeline] httpRequest
00:00:03.496 [Pipeline] echo
00:00:03.498 Sorcerer 10.211.164.20 is alive
00:00:03.506 [Pipeline] retry
00:00:03.508 [Pipeline] {
00:00:03.521 [Pipeline] httpRequest
00:00:03.526 HttpMethod: GET
00:00:03.526 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.527 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.536 Response Code: HTTP/1.1 200 OK
00:00:03.537 Success: Status code 200 is in the accepted range: 200,404
00:00:03.537 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.980 [Pipeline] }
00:00:07.991 [Pipeline] // retry
00:00:07.996 [Pipeline] sh
00:00:08.274 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.291 [Pipeline] httpRequest
00:00:08.912 [Pipeline] echo
00:00:08.914 Sorcerer 10.211.164.20 is alive
00:00:08.923 [Pipeline] retry
00:00:08.925 [Pipeline] {
00:00:08.939 [Pipeline] httpRequest
00:00:08.943 HttpMethod: GET
00:00:08.944 URL: http://10.211.164.20/packages/spdk_5c8d9922304f954f9b9612f124a8d7bc5102ca33.tar.gz
00:00:08.944 Sending request to url: http://10.211.164.20/packages/spdk_5c8d9922304f954f9b9612f124a8d7bc5102ca33.tar.gz
00:00:08.955 Response Code: HTTP/1.1 200 OK
00:00:08.956 Success: Status code 200 is in the accepted range: 200,404
00:00:08.957 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_5c8d9922304f954f9b9612f124a8d7bc5102ca33.tar.gz
00:01:42.012 [Pipeline] }
00:01:42.030 [Pipeline] // retry
00:01:42.041 [Pipeline] sh
00:01:42.324 + tar --no-same-owner -xf spdk_5c8d9922304f954f9b9612f124a8d7bc5102ca33.tar.gz
00:01:45.668 [Pipeline] sh
00:01:45.946 + git -C spdk log --oneline -n5
00:01:45.946 5c8d99223 bdev: Factor out checking bounce buffer necessity into helper function
00:01:45.946 d58114851 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io
00:01:45.946 32c3f377c bdev: Use data_block_size for upper layer buffer if hide_metadata is true
00:01:45.946 d3dfde872 bdev: Add APIs get metadata config via desc depending on hide_metadata option
00:01:45.946 b6a8866f3 bdev: Add spdk_bdev_open_ext_v2() to support per-open options
00:01:45.964 [Pipeline] writeFile
00:01:45.979 [Pipeline] sh
00:01:46.256 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:46.266 [Pipeline] sh
00:01:46.543 + cat autorun-spdk.conf
00:01:46.543 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:46.543 SPDK_RUN_ASAN=1
00:01:46.543 SPDK_RUN_UBSAN=1
00:01:46.543 SPDK_TEST_RAID=1
00:01:46.543 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:46.549 RUN_NIGHTLY=0
00:01:46.551 [Pipeline] }
00:01:46.565 [Pipeline] // stage
00:01:46.579 [Pipeline] stage
00:01:46.581 [Pipeline] { (Run VM)
00:01:46.594 [Pipeline] sh
00:01:46.875 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:46.875 + echo 'Start stage prepare_nvme.sh'
00:01:46.875 Start stage prepare_nvme.sh
00:01:46.875 + [[ -n 6 ]]
00:01:46.875 + disk_prefix=ex6
00:01:46.875 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:01:46.875 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:01:46.875 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:01:46.875 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:46.875 ++ SPDK_RUN_ASAN=1
00:01:46.875 ++ SPDK_RUN_UBSAN=1
00:01:46.875 ++ SPDK_TEST_RAID=1
00:01:46.875 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:46.875 ++ RUN_NIGHTLY=0
00:01:46.875 + cd /var/jenkins/workspace/raid-vg-autotest
00:01:46.875 + nvme_files=()
00:01:46.875 + declare -A nvme_files
00:01:46.875 + backend_dir=/var/lib/libvirt/images/backends
00:01:46.875 + nvme_files['nvme.img']=5G
00:01:46.875 + nvme_files['nvme-cmb.img']=5G
00:01:46.875 + nvme_files['nvme-multi0.img']=4G
00:01:46.875 + nvme_files['nvme-multi1.img']=4G
00:01:46.875 + nvme_files['nvme-multi2.img']=4G
00:01:46.875 + nvme_files['nvme-openstack.img']=8G
00:01:46.875 + nvme_files['nvme-zns.img']=5G
00:01:46.875 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:46.875 + (( SPDK_TEST_FTL == 1 ))
00:01:46.875 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:46.875 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:46.875 + for nvme in "${!nvme_files[@]}"
00:01:46.875 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:01:46.875 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:46.875 + for nvme in "${!nvme_files[@]}"
00:01:46.875 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:01:46.875 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:46.875 + for nvme in "${!nvme_files[@]}"
00:01:46.875 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:01:46.875 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:46.875 + for nvme in "${!nvme_files[@]}"
00:01:46.875 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:01:46.875 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:46.875 + for nvme in "${!nvme_files[@]}"
00:01:46.875 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:01:46.875 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:46.875 + for nvme in "${!nvme_files[@]}"
00:01:46.875 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:01:46.875 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:46.875 + for nvme in "${!nvme_files[@]}"
00:01:46.875 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:01:47.809 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:47.809 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:01:47.809 + echo 'End stage prepare_nvme.sh'
00:01:47.809 End stage prepare_nvme.sh
00:01:47.826 [Pipeline] sh
00:01:48.108 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:48.108 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39
00:01:48.108
00:01:48.108 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:01:48.108 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:01:48.108 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:01:48.108 HELP=0
00:01:48.108 DRY_RUN=0
00:01:48.108 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,
00:01:48.108 NVME_DISKS_TYPE=nvme,nvme,
00:01:48.108 NVME_AUTO_CREATE=0
00:01:48.108 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,
00:01:48.108 NVME_CMB=,,
00:01:48.108 NVME_PMR=,,
00:01:48.108 NVME_ZNS=,,
00:01:48.108 NVME_MS=,,
00:01:48.108 NVME_FDP=,,
00:01:48.108 SPDK_VAGRANT_DISTRO=fedora39
00:01:48.108 SPDK_VAGRANT_VMCPU=10
00:01:48.108 SPDK_VAGRANT_VMRAM=12288
00:01:48.108 SPDK_VAGRANT_PROVIDER=libvirt
00:01:48.108 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:48.108 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:48.108 SPDK_OPENSTACK_NETWORK=0
00:01:48.108 VAGRANT_PACKAGE_BOX=0
00:01:48.108 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:48.108 FORCE_DISTRO=true
00:01:48.108 VAGRANT_BOX_VERSION=
00:01:48.108 EXTRA_VAGRANTFILES=
00:01:48.108 NIC_MODEL=e1000
00:01:48.108
00:01:48.108 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:01:48.108 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:51.394 Bringing machine 'default' up with 'libvirt' provider...
00:01:51.961 ==> default: Creating image (snapshot of base box volume).
00:01:52.220 ==> default: Creating domain with the following settings...
00:01:52.220 ==> default:  -- Name: fedora39-39-1.5-1721788873-2326_default_1732111950_fecbfb73e1e568bd304e
00:01:52.220 ==> default:  -- Domain type: kvm
00:01:52.220 ==> default:  -- Cpus: 10
00:01:52.220 ==> default:  -- Feature: acpi
00:01:52.220 ==> default:  -- Feature: apic
00:01:52.220 ==> default:  -- Feature: pae
00:01:52.220 ==> default:  -- Memory: 12288M
00:01:52.220 ==> default:  -- Memory Backing: hugepages:
00:01:52.220 ==> default:  -- Management MAC:
00:01:52.220 ==> default:  -- Loader:
00:01:52.220 ==> default:  -- Nvram:
00:01:52.220 ==> default:  -- Base box: spdk/fedora39
00:01:52.220 ==> default:  -- Storage pool: default
00:01:52.220 ==> default:  -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732111950_fecbfb73e1e568bd304e.img (20G)
00:01:52.220 ==> default:  -- Volume Cache: default
00:01:52.220 ==> default:  -- Kernel:
00:01:52.220 ==> default:  -- Initrd:
00:01:52.220 ==> default:  -- Graphics Type: vnc
00:01:52.220 ==> default:  -- Graphics Port: -1
00:01:52.220 ==> default:  -- Graphics IP: 127.0.0.1
00:01:52.220 ==> default:  -- Graphics Password: Not defined
00:01:52.220 ==> default:  -- Video Type: cirrus
00:01:52.220 ==> default:  -- Video VRAM: 9216
00:01:52.220 ==> default:  -- Sound Type:
00:01:52.220 ==> default:  -- Keymap: en-us
00:01:52.220 ==> default:  -- TPM Path:
00:01:52.220 ==> default:  -- INPUT: type=mouse, bus=ps2
00:01:52.220 ==> default:  -- Command line args:
00:01:52.220 ==> default:  -> value=-device,
00:01:52.220 ==> default:  -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:52.220 ==> default:  -> value=-drive,
00:01:52.220 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0,
00:01:52.220 ==> default:  -> value=-device,
00:01:52.220 ==> default:  -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:52.220 ==> default:  -> value=-device,
00:01:52.220 ==> default:  -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:52.220 ==> default:  -> value=-drive,
00:01:52.220 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:52.220 ==> default:  -> value=-device,
00:01:52.220 ==> default:  -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:52.220 ==> default:  -> value=-drive,
00:01:52.220 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:52.220 ==> default:  -> value=-device,
00:01:52.220 ==> default:  -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:52.220 ==> default:  -> value=-drive,
00:01:52.220 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:52.220 ==> default:  -> value=-device,
00:01:52.220 ==> default:  -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:52.479 ==> default: Creating shared folders metadata...
00:01:52.479 ==> default: Starting domain.
00:01:54.380 ==> default: Waiting for domain to get an IP address...
00:02:09.257 ==> default: Waiting for SSH to become available...
00:02:10.634 ==> default: Configuring and enabling network interfaces...
00:02:14.825     default: SSH address: 192.168.121.140:22
00:02:14.825     default: SSH username: vagrant
00:02:14.825     default: SSH auth method: private key
00:02:16.753 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:24.869 ==> default: Mounting SSHFS shared folder...
00:02:26.248 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:26.248 ==> default: Checking Mount..
00:02:27.628 ==> default: Folder Successfully Mounted!
00:02:27.628 ==> default: Running provisioner: file...
00:02:28.565     default: ~/.gitconfig => .gitconfig
00:02:28.823
00:02:28.823 SUCCESS!
00:02:28.823
00:02:28.823 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:28.823 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:28.823 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:28.823
00:02:28.832 [Pipeline] }
00:02:28.850 [Pipeline] // stage
00:02:28.860 [Pipeline] dir
00:02:28.860 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:02:28.862 [Pipeline] {
00:02:28.875 [Pipeline] catchError
00:02:28.877 [Pipeline] {
00:02:28.890 [Pipeline] sh
00:02:29.169 + vagrant ssh-config --host vagrant
00:02:29.169 + sed -ne /^Host/,$p
00:02:29.169 + tee ssh_conf
00:02:33.388 Host vagrant
00:02:33.388 HostName 192.168.121.140
00:02:33.388 User vagrant
00:02:33.388 Port 22
00:02:33.388 UserKnownHostsFile /dev/null
00:02:33.388 StrictHostKeyChecking no
00:02:33.388 PasswordAuthentication no
00:02:33.388 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:33.388 IdentitiesOnly yes
00:02:33.388 LogLevel FATAL
00:02:33.388 ForwardAgent yes
00:02:33.388 ForwardX11 yes
00:02:33.388
00:02:33.401 [Pipeline] withEnv
00:02:33.403 [Pipeline] {
00:02:33.416 [Pipeline] sh
00:02:33.696 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:33.696 source /etc/os-release
00:02:33.696 [[ -e /image.version ]] && img=$(< /image.version)
00:02:33.696 # Minimal, systemd-like check.
00:02:33.696 if [[ -e /.dockerenv ]]; then
00:02:33.696 # Clear garbage from the node's name:
00:02:33.696 # agt-er_autotest_547-896 -> autotest_547-896
00:02:33.696 # $HOSTNAME is the actual container id
00:02:33.696 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:33.696 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:33.696 # We can assume this is a mount from a host where container is running,
00:02:33.696 # so fetch its hostname to easily identify the target swarm worker.
00:02:33.696 container="$(< /etc/hostname) ($agent)"
00:02:33.696 else
00:02:33.696 # Fallback
00:02:33.696 container=$agent
00:02:33.696 fi
00:02:33.696 fi
00:02:33.696 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:33.696
00:02:33.967 [Pipeline] }
00:02:33.984 [Pipeline] // withEnv
00:02:33.992 [Pipeline] setCustomBuildProperty
00:02:34.008 [Pipeline] stage
00:02:34.010 [Pipeline] { (Tests)
00:02:34.027 [Pipeline] sh
00:02:34.308 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:34.581 [Pipeline] sh
00:02:34.863 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:35.134 [Pipeline] timeout
00:02:35.134 Timeout set to expire in 1 hr 30 min
00:02:35.136 [Pipeline] {
00:02:35.147 [Pipeline] sh
00:02:35.424 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:35.990 HEAD is now at 5c8d99223 bdev: Factor out checking bounce buffer necessity into helper function
00:02:36.004 [Pipeline] sh
00:02:36.284 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:36.557 [Pipeline] sh
00:02:36.836 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:36.853 [Pipeline] sh
00:02:37.136 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:37.394 ++ readlink -f spdk_repo
00:02:37.394 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:37.394 + [[ -n /home/vagrant/spdk_repo ]]
00:02:37.394 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:37.394 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:37.394 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:37.394 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:37.394 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:37.394 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:37.394 + cd /home/vagrant/spdk_repo
00:02:37.394 + source /etc/os-release
00:02:37.394 ++ NAME='Fedora Linux'
00:02:37.394 ++ VERSION='39 (Cloud Edition)'
00:02:37.394 ++ ID=fedora
00:02:37.394 ++ VERSION_ID=39
00:02:37.394 ++ VERSION_CODENAME=
00:02:37.394 ++ PLATFORM_ID=platform:f39
00:02:37.394 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:37.394 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:37.394 ++ LOGO=fedora-logo-icon
00:02:37.394 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:37.394 ++ HOME_URL=https://fedoraproject.org/
00:02:37.394 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:37.394 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:37.394 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:37.394 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:37.394 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:37.394 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:37.394 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:37.394 ++ SUPPORT_END=2024-11-12
00:02:37.394 ++ VARIANT='Cloud Edition'
00:02:37.394 ++ VARIANT_ID=cloud
00:02:37.394 + uname -a
00:02:37.394 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:37.394 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:37.653 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:37.911 Hugepages
00:02:37.911 node hugesize free / total
00:02:37.911 node0 1048576kB 0 / 0
00:02:37.911 node0 2048kB 0 / 0
00:02:37.911
00:02:37.911 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:37.911 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:37.911 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:37.911 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:37.911 + rm -f /tmp/spdk-ld-path
00:02:37.911 + source autorun-spdk.conf
00:02:37.911 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:37.911 ++ SPDK_RUN_ASAN=1
00:02:37.911 ++ SPDK_RUN_UBSAN=1
00:02:37.911 ++ SPDK_TEST_RAID=1
00:02:37.911 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:37.911 ++ RUN_NIGHTLY=0
00:02:37.911 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:37.911 + [[ -n '' ]]
00:02:37.911 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:37.911 + for M in /var/spdk/build-*-manifest.txt
00:02:37.911 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:37.911 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:37.911 + for M in /var/spdk/build-*-manifest.txt
00:02:37.911 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:37.911 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:37.911 + for M in /var/spdk/build-*-manifest.txt
00:02:37.911 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:37.911 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:37.911 ++ uname
00:02:37.911 + [[ Linux == \L\i\n\u\x ]]
00:02:37.911 + sudo dmesg -T
00:02:37.911 + sudo dmesg --clear
00:02:37.911 + dmesg_pid=5200
00:02:37.911 + sudo dmesg -Tw
00:02:37.911 + [[ Fedora Linux == FreeBSD ]]
00:02:37.911 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:37.911 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:37.911 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:37.911 + [[ -x /usr/src/fio-static/fio ]]
00:02:37.911 + export FIO_BIN=/usr/src/fio-static/fio
00:02:37.911 + FIO_BIN=/usr/src/fio-static/fio
00:02:37.911 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:37.911 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:37.911 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:37.911 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:37.911 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:37.911 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:37.911 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:37.911 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:37.911 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:38.171 14:13:16 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:38.171 14:13:16 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:38.171 14:13:16 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:38.171 14:13:16 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:02:38.171 14:13:16 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:02:38.171 14:13:16 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:02:38.171 14:13:16 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:38.171 14:13:16 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:02:38.171 14:13:16 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:38.171 14:13:16 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:38.171 14:13:16 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:38.171 14:13:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:38.171 14:13:16 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:38.171 14:13:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:38.171 14:13:16 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:38.171 14:13:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:38.171 14:13:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:38.171 14:13:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:38.171 14:13:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:38.171 14:13:16 -- paths/export.sh@5 -- $ export PATH
00:02:38.171 14:13:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:38.171 14:13:16 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:38.171 14:13:16 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:38.171 14:13:16 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732111996.XXXXXX
00:02:38.171 14:13:16 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732111996.Pp5kFk
00:02:38.171 14:13:16 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:38.171 14:13:16 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:38.171 14:13:16 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:38.171 14:13:16 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:38.171 14:13:16 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:38.171 14:13:16 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:38.171 14:13:16 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:38.171 14:13:16 -- common/autotest_common.sh@10 -- $ set +x
00:02:38.171 14:13:17 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:02:38.171 14:13:17 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:38.171 14:13:17 -- pm/common@17 -- $ local monitor
00:02:38.171 14:13:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:38.171 14:13:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:38.171 14:13:17 -- pm/common@25 -- $ sleep 1
00:02:38.171 14:13:17 -- pm/common@21 -- $ date +%s
00:02:38.171 14:13:17 -- pm/common@21 -- $ date +%s
00:02:38.171 14:13:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732111997
00:02:38.171 14:13:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732111997
00:02:38.171 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732111997_collect-cpu-load.pm.log
00:02:38.171 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732111997_collect-vmstat.pm.log
00:02:39.108 14:13:18 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:39.108 14:13:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:39.108 14:13:18 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:39.108 14:13:18 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:39.108 14:13:18 -- spdk/autobuild.sh@16 -- $ date -u
00:02:39.108 Wed Nov 20 02:13:18 PM UTC 2024
00:02:39.108 14:13:18 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:39.108 v25.01-pre-225-g5c8d99223
00:02:39.108 14:13:18 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:39.108 14:13:18 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:39.108 14:13:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:39.108 14:13:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:39.108 14:13:18 -- common/autotest_common.sh@10 -- $ set +x
00:02:39.108 ************************************
00:02:39.108 START TEST asan
00:02:39.108 ************************************
00:02:39.108 using asan
00:02:39.108 14:13:18 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:39.108
00:02:39.108 real	0m0.000s
00:02:39.108 user	0m0.000s
00:02:39.108 sys	0m0.000s
00:02:39.108 14:13:18 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:39.108 14:13:18 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:39.108 ************************************
00:02:39.108 END TEST asan
00:02:39.108 ************************************
00:02:39.367 14:13:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:39.367 14:13:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:39.367 14:13:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:39.367 14:13:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:39.367 14:13:18 -- common/autotest_common.sh@10 -- $ set +x
00:02:39.367 ************************************
00:02:39.367 START TEST ubsan
00:02:39.367 ************************************
00:02:39.368 using ubsan
00:02:39.368 14:13:18 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:39.368
00:02:39.368 real	0m0.000s
00:02:39.368 user	0m0.000s
00:02:39.368 sys	0m0.000s
00:02:39.368 14:13:18 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:39.368 14:13:18 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:39.368 ************************************
00:02:39.368 END TEST ubsan
00:02:39.368 ************************************
00:02:39.368 14:13:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:39.368 14:13:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:39.368 14:13:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:39.368 14:13:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:39.368 14:13:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:39.368 14:13:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:39.368 14:13:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:39.368 14:13:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:39.368 14:13:18 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:02:39.368 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:39.368 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:39.934 Using 'verbs' RDMA provider
00:02:55.831 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:08.040 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:08.040 Creating mk/config.mk...done.
00:03:08.040 Creating mk/cc.flags.mk...done.
00:03:08.040 Type 'make' to build.
00:03:08.040 14:13:46 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:08.040 14:13:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:08.040 14:13:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:08.040 14:13:46 -- common/autotest_common.sh@10 -- $ set +x
00:03:08.040 ************************************
00:03:08.040 START TEST make
00:03:08.040 ************************************
00:03:08.040 14:13:46 make -- common/autotest_common.sh@1129 -- $ make -j10
00:03:08.040 make[1]: Nothing to be done for 'all'.
00:03:22.920 The Meson build system 00:03:22.920 Version: 1.5.0 00:03:22.920 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:22.920 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:22.920 Build type: native build 00:03:22.920 Program cat found: YES (/usr/bin/cat) 00:03:22.920 Project name: DPDK 00:03:22.920 Project version: 24.03.0 00:03:22.920 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:22.920 C linker for the host machine: cc ld.bfd 2.40-14 00:03:22.920 Host machine cpu family: x86_64 00:03:22.920 Host machine cpu: x86_64 00:03:22.920 Message: ## Building in Developer Mode ## 00:03:22.920 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:22.920 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:22.920 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:22.920 Program python3 found: YES (/usr/bin/python3) 00:03:22.920 Program cat found: YES (/usr/bin/cat) 00:03:22.920 Compiler for C supports arguments -march=native: YES 00:03:22.920 Checking for size of "void *" : 8 00:03:22.920 Checking for size of "void *" : 8 (cached) 00:03:22.920 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:22.920 Library m found: YES 00:03:22.920 Library numa found: YES 00:03:22.920 Has header "numaif.h" : YES 00:03:22.920 Library fdt found: NO 00:03:22.920 Library execinfo found: NO 00:03:22.920 Has header "execinfo.h" : YES 00:03:22.920 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:22.920 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:22.920 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:22.920 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:22.920 Run-time dependency openssl found: YES 3.1.1 00:03:22.920 Run-time dependency libpcap found: YES 1.10.4 00:03:22.920 Has header "pcap.h" with dependency 
libpcap: YES 00:03:22.920 Compiler for C supports arguments -Wcast-qual: YES 00:03:22.920 Compiler for C supports arguments -Wdeprecated: YES 00:03:22.920 Compiler for C supports arguments -Wformat: YES 00:03:22.920 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:22.920 Compiler for C supports arguments -Wformat-security: NO 00:03:22.920 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:22.920 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:22.920 Compiler for C supports arguments -Wnested-externs: YES 00:03:22.920 Compiler for C supports arguments -Wold-style-definition: YES 00:03:22.920 Compiler for C supports arguments -Wpointer-arith: YES 00:03:22.920 Compiler for C supports arguments -Wsign-compare: YES 00:03:22.920 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:22.920 Compiler for C supports arguments -Wundef: YES 00:03:22.920 Compiler for C supports arguments -Wwrite-strings: YES 00:03:22.920 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:22.920 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:22.920 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:22.920 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:22.920 Program objdump found: YES (/usr/bin/objdump) 00:03:22.920 Compiler for C supports arguments -mavx512f: YES 00:03:22.920 Checking if "AVX512 checking" compiles: YES 00:03:22.920 Fetching value of define "__SSE4_2__" : 1 00:03:22.920 Fetching value of define "__AES__" : 1 00:03:22.920 Fetching value of define "__AVX__" : 1 00:03:22.920 Fetching value of define "__AVX2__" : 1 00:03:22.920 Fetching value of define "__AVX512BW__" : (undefined) 00:03:22.920 Fetching value of define "__AVX512CD__" : (undefined) 00:03:22.920 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:22.920 Fetching value of define "__AVX512F__" : (undefined) 00:03:22.920 Fetching value of define "__AVX512VL__" : 
(undefined) 00:03:22.920 Fetching value of define "__PCLMUL__" : 1 00:03:22.920 Fetching value of define "__RDRND__" : 1 00:03:22.920 Fetching value of define "__RDSEED__" : 1 00:03:22.920 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:22.920 Fetching value of define "__znver1__" : (undefined) 00:03:22.920 Fetching value of define "__znver2__" : (undefined) 00:03:22.920 Fetching value of define "__znver3__" : (undefined) 00:03:22.920 Fetching value of define "__znver4__" : (undefined) 00:03:22.920 Library asan found: YES 00:03:22.920 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:22.920 Message: lib/log: Defining dependency "log" 00:03:22.920 Message: lib/kvargs: Defining dependency "kvargs" 00:03:22.920 Message: lib/telemetry: Defining dependency "telemetry" 00:03:22.920 Library rt found: YES 00:03:22.920 Checking for function "getentropy" : NO 00:03:22.920 Message: lib/eal: Defining dependency "eal" 00:03:22.920 Message: lib/ring: Defining dependency "ring" 00:03:22.920 Message: lib/rcu: Defining dependency "rcu" 00:03:22.920 Message: lib/mempool: Defining dependency "mempool" 00:03:22.920 Message: lib/mbuf: Defining dependency "mbuf" 00:03:22.920 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:22.920 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:22.920 Compiler for C supports arguments -mpclmul: YES 00:03:22.920 Compiler for C supports arguments -maes: YES 00:03:22.920 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:22.920 Compiler for C supports arguments -mavx512bw: YES 00:03:22.920 Compiler for C supports arguments -mavx512dq: YES 00:03:22.920 Compiler for C supports arguments -mavx512vl: YES 00:03:22.920 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:22.920 Compiler for C supports arguments -mavx2: YES 00:03:22.921 Compiler for C supports arguments -mavx: YES 00:03:22.921 Message: lib/net: Defining dependency "net" 00:03:22.921 Message: lib/meter: Defining 
dependency "meter" 00:03:22.921 Message: lib/ethdev: Defining dependency "ethdev" 00:03:22.921 Message: lib/pci: Defining dependency "pci" 00:03:22.921 Message: lib/cmdline: Defining dependency "cmdline" 00:03:22.921 Message: lib/hash: Defining dependency "hash" 00:03:22.921 Message: lib/timer: Defining dependency "timer" 00:03:22.921 Message: lib/compressdev: Defining dependency "compressdev" 00:03:22.921 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:22.921 Message: lib/dmadev: Defining dependency "dmadev" 00:03:22.921 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:22.921 Message: lib/power: Defining dependency "power" 00:03:22.921 Message: lib/reorder: Defining dependency "reorder" 00:03:22.921 Message: lib/security: Defining dependency "security" 00:03:22.921 Has header "linux/userfaultfd.h" : YES 00:03:22.921 Has header "linux/vduse.h" : YES 00:03:22.921 Message: lib/vhost: Defining dependency "vhost" 00:03:22.921 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:22.921 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:22.921 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:22.921 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:22.921 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:22.921 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:22.921 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:22.921 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:22.921 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:22.921 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:22.921 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:22.921 Configuring doxy-api-html.conf using configuration 00:03:22.921 Configuring doxy-api-man.conf using configuration 00:03:22.921 Program mandb found: YES 
(/usr/bin/mandb) 00:03:22.921 Program sphinx-build found: NO 00:03:22.921 Configuring rte_build_config.h using configuration 00:03:22.921 Message: 00:03:22.921 ================= 00:03:22.921 Applications Enabled 00:03:22.921 ================= 00:03:22.921 00:03:22.921 apps: 00:03:22.921 00:03:22.921 00:03:22.921 Message: 00:03:22.921 ================= 00:03:22.921 Libraries Enabled 00:03:22.921 ================= 00:03:22.921 00:03:22.921 libs: 00:03:22.921 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:22.921 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:22.921 cryptodev, dmadev, power, reorder, security, vhost, 00:03:22.921 00:03:22.921 Message: 00:03:22.921 =============== 00:03:22.921 Drivers Enabled 00:03:22.921 =============== 00:03:22.921 00:03:22.921 common: 00:03:22.921 00:03:22.921 bus: 00:03:22.921 pci, vdev, 00:03:22.921 mempool: 00:03:22.921 ring, 00:03:22.921 dma: 00:03:22.921 00:03:22.921 net: 00:03:22.921 00:03:22.921 crypto: 00:03:22.921 00:03:22.921 compress: 00:03:22.921 00:03:22.921 vdpa: 00:03:22.921 00:03:22.921 00:03:22.921 Message: 00:03:22.921 ================= 00:03:22.921 Content Skipped 00:03:22.921 ================= 00:03:22.921 00:03:22.921 apps: 00:03:22.921 dumpcap: explicitly disabled via build config 00:03:22.921 graph: explicitly disabled via build config 00:03:22.921 pdump: explicitly disabled via build config 00:03:22.921 proc-info: explicitly disabled via build config 00:03:22.921 test-acl: explicitly disabled via build config 00:03:22.921 test-bbdev: explicitly disabled via build config 00:03:22.921 test-cmdline: explicitly disabled via build config 00:03:22.921 test-compress-perf: explicitly disabled via build config 00:03:22.921 test-crypto-perf: explicitly disabled via build config 00:03:22.921 test-dma-perf: explicitly disabled via build config 00:03:22.921 test-eventdev: explicitly disabled via build config 00:03:22.921 test-fib: explicitly disabled via build config 00:03:22.921 
test-flow-perf: explicitly disabled via build config 00:03:22.921 test-gpudev: explicitly disabled via build config 00:03:22.921 test-mldev: explicitly disabled via build config 00:03:22.921 test-pipeline: explicitly disabled via build config 00:03:22.921 test-pmd: explicitly disabled via build config 00:03:22.921 test-regex: explicitly disabled via build config 00:03:22.921 test-sad: explicitly disabled via build config 00:03:22.921 test-security-perf: explicitly disabled via build config 00:03:22.921 00:03:22.921 libs: 00:03:22.921 argparse: explicitly disabled via build config 00:03:22.921 metrics: explicitly disabled via build config 00:03:22.921 acl: explicitly disabled via build config 00:03:22.921 bbdev: explicitly disabled via build config 00:03:22.921 bitratestats: explicitly disabled via build config 00:03:22.921 bpf: explicitly disabled via build config 00:03:22.921 cfgfile: explicitly disabled via build config 00:03:22.921 distributor: explicitly disabled via build config 00:03:22.921 efd: explicitly disabled via build config 00:03:22.921 eventdev: explicitly disabled via build config 00:03:22.921 dispatcher: explicitly disabled via build config 00:03:22.921 gpudev: explicitly disabled via build config 00:03:22.921 gro: explicitly disabled via build config 00:03:22.921 gso: explicitly disabled via build config 00:03:22.921 ip_frag: explicitly disabled via build config 00:03:22.921 jobstats: explicitly disabled via build config 00:03:22.921 latencystats: explicitly disabled via build config 00:03:22.921 lpm: explicitly disabled via build config 00:03:22.921 member: explicitly disabled via build config 00:03:22.921 pcapng: explicitly disabled via build config 00:03:22.921 rawdev: explicitly disabled via build config 00:03:22.921 regexdev: explicitly disabled via build config 00:03:22.921 mldev: explicitly disabled via build config 00:03:22.921 rib: explicitly disabled via build config 00:03:22.921 sched: explicitly disabled via build config 00:03:22.921 
stack: explicitly disabled via build config 00:03:22.921 ipsec: explicitly disabled via build config 00:03:22.921 pdcp: explicitly disabled via build config 00:03:22.921 fib: explicitly disabled via build config 00:03:22.921 port: explicitly disabled via build config 00:03:22.921 pdump: explicitly disabled via build config 00:03:22.921 table: explicitly disabled via build config 00:03:22.921 pipeline: explicitly disabled via build config 00:03:22.921 graph: explicitly disabled via build config 00:03:22.921 node: explicitly disabled via build config 00:03:22.921 00:03:22.921 drivers: 00:03:22.921 common/cpt: not in enabled drivers build config 00:03:22.921 common/dpaax: not in enabled drivers build config 00:03:22.921 common/iavf: not in enabled drivers build config 00:03:22.921 common/idpf: not in enabled drivers build config 00:03:22.921 common/ionic: not in enabled drivers build config 00:03:22.921 common/mvep: not in enabled drivers build config 00:03:22.921 common/octeontx: not in enabled drivers build config 00:03:22.921 bus/auxiliary: not in enabled drivers build config 00:03:22.921 bus/cdx: not in enabled drivers build config 00:03:22.921 bus/dpaa: not in enabled drivers build config 00:03:22.921 bus/fslmc: not in enabled drivers build config 00:03:22.921 bus/ifpga: not in enabled drivers build config 00:03:22.921 bus/platform: not in enabled drivers build config 00:03:22.921 bus/uacce: not in enabled drivers build config 00:03:22.921 bus/vmbus: not in enabled drivers build config 00:03:22.921 common/cnxk: not in enabled drivers build config 00:03:22.921 common/mlx5: not in enabled drivers build config 00:03:22.921 common/nfp: not in enabled drivers build config 00:03:22.921 common/nitrox: not in enabled drivers build config 00:03:22.921 common/qat: not in enabled drivers build config 00:03:22.921 common/sfc_efx: not in enabled drivers build config 00:03:22.921 mempool/bucket: not in enabled drivers build config 00:03:22.921 mempool/cnxk: not in enabled 
drivers build config 00:03:22.921 mempool/dpaa: not in enabled drivers build config 00:03:22.921 mempool/dpaa2: not in enabled drivers build config 00:03:22.921 mempool/octeontx: not in enabled drivers build config 00:03:22.921 mempool/stack: not in enabled drivers build config 00:03:22.921 dma/cnxk: not in enabled drivers build config 00:03:22.921 dma/dpaa: not in enabled drivers build config 00:03:22.921 dma/dpaa2: not in enabled drivers build config 00:03:22.921 dma/hisilicon: not in enabled drivers build config 00:03:22.921 dma/idxd: not in enabled drivers build config 00:03:22.921 dma/ioat: not in enabled drivers build config 00:03:22.921 dma/skeleton: not in enabled drivers build config 00:03:22.921 net/af_packet: not in enabled drivers build config 00:03:22.921 net/af_xdp: not in enabled drivers build config 00:03:22.921 net/ark: not in enabled drivers build config 00:03:22.921 net/atlantic: not in enabled drivers build config 00:03:22.921 net/avp: not in enabled drivers build config 00:03:22.921 net/axgbe: not in enabled drivers build config 00:03:22.921 net/bnx2x: not in enabled drivers build config 00:03:22.921 net/bnxt: not in enabled drivers build config 00:03:22.921 net/bonding: not in enabled drivers build config 00:03:22.921 net/cnxk: not in enabled drivers build config 00:03:22.921 net/cpfl: not in enabled drivers build config 00:03:22.921 net/cxgbe: not in enabled drivers build config 00:03:22.921 net/dpaa: not in enabled drivers build config 00:03:22.921 net/dpaa2: not in enabled drivers build config 00:03:22.921 net/e1000: not in enabled drivers build config 00:03:22.921 net/ena: not in enabled drivers build config 00:03:22.921 net/enetc: not in enabled drivers build config 00:03:22.921 net/enetfec: not in enabled drivers build config 00:03:22.921 net/enic: not in enabled drivers build config 00:03:22.921 net/failsafe: not in enabled drivers build config 00:03:22.921 net/fm10k: not in enabled drivers build config 00:03:22.921 net/gve: not in 
enabled drivers build config 00:03:22.921 net/hinic: not in enabled drivers build config 00:03:22.921 net/hns3: not in enabled drivers build config 00:03:22.921 net/i40e: not in enabled drivers build config 00:03:22.921 net/iavf: not in enabled drivers build config 00:03:22.921 net/ice: not in enabled drivers build config 00:03:22.921 net/idpf: not in enabled drivers build config 00:03:22.921 net/igc: not in enabled drivers build config 00:03:22.921 net/ionic: not in enabled drivers build config 00:03:22.921 net/ipn3ke: not in enabled drivers build config 00:03:22.922 net/ixgbe: not in enabled drivers build config 00:03:22.922 net/mana: not in enabled drivers build config 00:03:22.922 net/memif: not in enabled drivers build config 00:03:22.922 net/mlx4: not in enabled drivers build config 00:03:22.922 net/mlx5: not in enabled drivers build config 00:03:22.922 net/mvneta: not in enabled drivers build config 00:03:22.922 net/mvpp2: not in enabled drivers build config 00:03:22.922 net/netvsc: not in enabled drivers build config 00:03:22.922 net/nfb: not in enabled drivers build config 00:03:22.922 net/nfp: not in enabled drivers build config 00:03:22.922 net/ngbe: not in enabled drivers build config 00:03:22.922 net/null: not in enabled drivers build config 00:03:22.922 net/octeontx: not in enabled drivers build config 00:03:22.922 net/octeon_ep: not in enabled drivers build config 00:03:22.922 net/pcap: not in enabled drivers build config 00:03:22.922 net/pfe: not in enabled drivers build config 00:03:22.922 net/qede: not in enabled drivers build config 00:03:22.922 net/ring: not in enabled drivers build config 00:03:22.922 net/sfc: not in enabled drivers build config 00:03:22.922 net/softnic: not in enabled drivers build config 00:03:22.922 net/tap: not in enabled drivers build config 00:03:22.922 net/thunderx: not in enabled drivers build config 00:03:22.922 net/txgbe: not in enabled drivers build config 00:03:22.922 net/vdev_netvsc: not in enabled drivers build 
config 00:03:22.922 net/vhost: not in enabled drivers build config 00:03:22.922 net/virtio: not in enabled drivers build config 00:03:22.922 net/vmxnet3: not in enabled drivers build config 00:03:22.922 raw/*: missing internal dependency, "rawdev" 00:03:22.922 crypto/armv8: not in enabled drivers build config 00:03:22.922 crypto/bcmfs: not in enabled drivers build config 00:03:22.922 crypto/caam_jr: not in enabled drivers build config 00:03:22.922 crypto/ccp: not in enabled drivers build config 00:03:22.922 crypto/cnxk: not in enabled drivers build config 00:03:22.922 crypto/dpaa_sec: not in enabled drivers build config 00:03:22.922 crypto/dpaa2_sec: not in enabled drivers build config 00:03:22.922 crypto/ipsec_mb: not in enabled drivers build config 00:03:22.922 crypto/mlx5: not in enabled drivers build config 00:03:22.922 crypto/mvsam: not in enabled drivers build config 00:03:22.922 crypto/nitrox: not in enabled drivers build config 00:03:22.922 crypto/null: not in enabled drivers build config 00:03:22.922 crypto/octeontx: not in enabled drivers build config 00:03:22.922 crypto/openssl: not in enabled drivers build config 00:03:22.922 crypto/scheduler: not in enabled drivers build config 00:03:22.922 crypto/uadk: not in enabled drivers build config 00:03:22.922 crypto/virtio: not in enabled drivers build config 00:03:22.922 compress/isal: not in enabled drivers build config 00:03:22.922 compress/mlx5: not in enabled drivers build config 00:03:22.922 compress/nitrox: not in enabled drivers build config 00:03:22.922 compress/octeontx: not in enabled drivers build config 00:03:22.922 compress/zlib: not in enabled drivers build config 00:03:22.922 regex/*: missing internal dependency, "regexdev" 00:03:22.922 ml/*: missing internal dependency, "mldev" 00:03:22.922 vdpa/ifc: not in enabled drivers build config 00:03:22.922 vdpa/mlx5: not in enabled drivers build config 00:03:22.922 vdpa/nfp: not in enabled drivers build config 00:03:22.922 vdpa/sfc: not in enabled 
drivers build config 00:03:22.922 event/*: missing internal dependency, "eventdev" 00:03:22.922 baseband/*: missing internal dependency, "bbdev" 00:03:22.922 gpu/*: missing internal dependency, "gpudev" 00:03:22.922 00:03:22.922 00:03:22.922 Build targets in project: 85 00:03:22.922 00:03:22.922 DPDK 24.03.0 00:03:22.922 00:03:22.922 User defined options 00:03:22.922 buildtype : debug 00:03:22.922 default_library : shared 00:03:22.922 libdir : lib 00:03:22.922 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:22.922 b_sanitize : address 00:03:22.922 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:22.922 c_link_args : 00:03:22.922 cpu_instruction_set: native 00:03:22.922 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:22.922 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:22.922 enable_docs : false 00:03:22.922 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:22.922 enable_kmods : false 00:03:22.922 max_lcores : 128 00:03:22.922 tests : false 00:03:22.922 00:03:22.922 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:22.922 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:22.922 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:22.922 [2/268] Linking static target lib/librte_kvargs.a 00:03:22.922 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:22.922 [4/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:22.922 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:22.922 [6/268] Linking static target lib/librte_log.a 00:03:23.489 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:23.489 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.489 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:23.489 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:23.489 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:23.747 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:23.747 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:23.747 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:23.747 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:23.747 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:23.747 [17/268] Linking static target lib/librte_telemetry.a 00:03:23.747 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:24.006 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.006 [20/268] Linking target lib/librte_log.so.24.1 00:03:24.290 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:24.290 [22/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:24.290 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:24.585 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:24.585 [25/268] Linking target lib/librte_kvargs.so.24.1 00:03:24.843 [26/268] Generating symbol file 
lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:24.843 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.843 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:24.843 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:24.843 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:24.843 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:24.843 [32/268] Linking target lib/librte_telemetry.so.24.1 00:03:24.843 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:25.101 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:25.101 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:25.359 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:25.618 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:25.618 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:25.618 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:25.877 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:25.877 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:25.877 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:25.877 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:25.877 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:25.877 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:26.135 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:26.135 [47/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:26.135 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:26.135 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:26.135 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:26.703 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:26.703 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:26.962 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:26.962 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:26.962 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:26.962 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:26.962 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:26.962 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:27.221 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:27.221 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:27.221 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:27.789 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:27.789 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:28.047 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:28.047 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:28.047 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:28.047 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:28.047 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:28.306 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:28.306 [70/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:28.306 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:28.306 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:28.306 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:28.306 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:28.564 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:28.564 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:28.564 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:28.823 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:28.823 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:28.823 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:28.823 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:28.823 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:29.121 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:29.121 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:29.121 [85/268] Linking static target lib/librte_eal.a 00:03:29.417 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:29.417 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:29.675 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:29.675 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:29.675 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:29.675 [91/268] Linking static target lib/librte_rcu.a 00:03:29.675 [92/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:29.675 [93/268] Linking static target lib/librte_ring.a 00:03:29.675 
[94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:29.675 [95/268] Linking static target lib/librte_mempool.a 00:03:29.933 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:29.933 [97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.933 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:30.191 [99/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.191 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:30.191 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:30.191 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:30.758 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:30.758 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:30.758 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:30.758 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:30.758 [107/268] Linking static target lib/librte_net.a 00:03:31.017 [108/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.017 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:31.017 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:31.017 [111/268] Linking static target lib/librte_meter.a 00:03:31.274 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:31.274 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.274 [114/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:31.274 [115/268] Linking static target lib/librte_mbuf.a 00:03:31.274 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:31.532 [117/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:31.532 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.096 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:32.096 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:32.353 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:32.353 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.353 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:32.611 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:32.611 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:32.611 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:32.611 [127/268] Linking static target lib/librte_pci.a 00:03:32.611 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:32.868 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:32.868 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:32.868 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:32.868 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:33.127 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:33.127 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:33.127 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.127 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:33.127 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:33.127 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:33.127 [139/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:33.127 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:33.386 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:33.386 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:33.386 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:33.386 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:33.386 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:33.644 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:33.644 [147/268] Linking static target lib/librte_cmdline.a 00:03:33.903 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:34.162 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:34.162 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:34.162 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:34.162 [152/268] Linking static target lib/librte_timer.a 00:03:34.162 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:34.422 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:34.709 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:34.709 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:34.987 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.987 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:34.987 [159/268] Linking static target lib/librte_compressdev.a 00:03:34.987 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:34.987 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 
00:03:35.245 [162/268] Linking static target lib/librte_ethdev.a 00:03:35.245 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:35.245 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:35.245 [165/268] Linking static target lib/librte_hash.a 00:03:35.504 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:35.504 [167/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.504 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:35.504 [169/268] Linking static target lib/librte_dmadev.a 00:03:35.504 [170/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:35.762 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:35.762 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:36.020 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:36.020 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.278 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:36.537 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:36.537 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.537 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:36.537 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.537 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:36.795 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:36.795 [182/268] Linking static target lib/librte_cryptodev.a 00:03:36.795 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:37.053 
[184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:37.312 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:37.312 [186/268] Linking static target lib/librte_power.a 00:03:37.570 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:37.570 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:37.570 [189/268] Linking static target lib/librte_reorder.a 00:03:37.570 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:37.570 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:37.828 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:37.828 [193/268] Linking static target lib/librte_security.a 00:03:38.396 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.396 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:38.962 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.962 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.962 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:38.962 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:39.221 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:39.480 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:39.738 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.738 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:39.738 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:39.998 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:39.998 [206/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:39.998 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:40.262 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:40.262 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:40.520 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:40.520 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:40.780 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:40.780 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:40.780 [214/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:40.780 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:40.780 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:40.780 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:40.780 [218/268] Linking static target drivers/librte_bus_pci.a 00:03:40.780 [219/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:40.780 [220/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:40.780 [221/268] Linking static target drivers/librte_bus_vdev.a 00:03:41.039 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:41.039 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:41.039 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:41.039 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:41.039 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.298 [227/268] 
Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.866 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:42.125 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.383 [230/268] Linking target lib/librte_eal.so.24.1 00:03:42.383 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:42.383 [232/268] Linking target lib/librte_meter.so.24.1 00:03:42.383 [233/268] Linking target lib/librte_ring.so.24.1 00:03:42.383 [234/268] Linking target lib/librte_dmadev.so.24.1 00:03:42.383 [235/268] Linking target lib/librte_pci.so.24.1 00:03:42.642 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:42.642 [237/268] Linking target lib/librte_timer.so.24.1 00:03:42.642 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:42.642 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:42.642 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:42.642 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:42.642 [242/268] Linking target lib/librte_rcu.so.24.1 00:03:42.642 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:42.642 [244/268] Linking target lib/librte_mempool.so.24.1 00:03:42.642 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:42.901 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:42.901 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:42.901 [248/268] Linking target lib/librte_mbuf.so.24.1 00:03:42.901 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:43.159 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:43.159 [251/268] 
Linking target lib/librte_reorder.so.24.1 00:03:43.159 [252/268] Linking target lib/librte_net.so.24.1 00:03:43.159 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:43.159 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:43.159 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:43.159 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:43.418 [257/268] Linking target lib/librte_hash.so.24.1 00:03:43.418 [258/268] Linking target lib/librte_cmdline.so.24.1 00:03:43.418 [259/268] Linking target lib/librte_security.so.24.1 00:03:43.418 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:43.986 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.986 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:44.245 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:44.245 [264/268] Linking target lib/librte_power.so.24.1 00:03:46.775 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:46.775 [266/268] Linking static target lib/librte_vhost.a 00:03:48.149 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.149 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:48.149 INFO: autodetecting backend as ninja 00:03:48.149 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:14.696 CC lib/ut_mock/mock.o 00:04:14.696 CC lib/ut/ut.o 00:04:14.696 CC lib/log/log.o 00:04:14.696 CC lib/log/log_flags.o 00:04:14.696 CC lib/log/log_deprecated.o 00:04:14.696 LIB libspdk_ut_mock.a 00:04:14.696 LIB libspdk_ut.a 00:04:14.696 SO libspdk_ut_mock.so.6.0 00:04:14.696 LIB libspdk_log.a 00:04:14.696 SO libspdk_ut.so.2.0 00:04:14.696 SYMLINK libspdk_ut_mock.so 00:04:14.696 SO libspdk_log.so.7.1 
00:04:14.696 SYMLINK libspdk_ut.so 00:04:14.696 SYMLINK libspdk_log.so 00:04:14.696 CC lib/ioat/ioat.o 00:04:14.696 CXX lib/trace_parser/trace.o 00:04:14.696 CC lib/dma/dma.o 00:04:14.696 CC lib/util/base64.o 00:04:14.696 CC lib/util/bit_array.o 00:04:14.696 CC lib/util/cpuset.o 00:04:14.696 CC lib/util/crc32.o 00:04:14.696 CC lib/util/crc16.o 00:04:14.696 CC lib/util/crc32c.o 00:04:14.696 CC lib/vfio_user/host/vfio_user_pci.o 00:04:14.696 CC lib/util/crc32_ieee.o 00:04:14.696 CC lib/util/crc64.o 00:04:14.696 CC lib/util/dif.o 00:04:14.696 CC lib/util/fd.o 00:04:14.696 CC lib/util/fd_group.o 00:04:14.696 LIB libspdk_dma.a 00:04:14.696 LIB libspdk_ioat.a 00:04:14.696 SO libspdk_dma.so.5.0 00:04:14.696 SO libspdk_ioat.so.7.0 00:04:14.696 CC lib/util/file.o 00:04:14.696 SYMLINK libspdk_dma.so 00:04:14.696 CC lib/util/hexlify.o 00:04:14.696 CC lib/vfio_user/host/vfio_user.o 00:04:14.696 SYMLINK libspdk_ioat.so 00:04:14.696 CC lib/util/iov.o 00:04:14.696 CC lib/util/math.o 00:04:14.696 CC lib/util/net.o 00:04:14.696 CC lib/util/pipe.o 00:04:14.696 CC lib/util/strerror_tls.o 00:04:14.696 CC lib/util/string.o 00:04:14.696 CC lib/util/uuid.o 00:04:14.696 CC lib/util/xor.o 00:04:14.696 LIB libspdk_vfio_user.a 00:04:14.696 CC lib/util/zipf.o 00:04:14.696 CC lib/util/md5.o 00:04:14.696 SO libspdk_vfio_user.so.5.0 00:04:14.696 SYMLINK libspdk_vfio_user.so 00:04:14.696 LIB libspdk_trace_parser.a 00:04:14.696 LIB libspdk_util.a 00:04:14.696 SO libspdk_trace_parser.so.6.0 00:04:14.696 SO libspdk_util.so.10.1 00:04:14.696 SYMLINK libspdk_trace_parser.so 00:04:14.696 SYMLINK libspdk_util.so 00:04:14.696 CC lib/rdma_utils/rdma_utils.o 00:04:14.696 CC lib/json/json_parse.o 00:04:14.696 CC lib/conf/conf.o 00:04:14.696 CC lib/json/json_util.o 00:04:14.696 CC lib/json/json_write.o 00:04:14.696 CC lib/env_dpdk/env.o 00:04:14.696 CC lib/env_dpdk/memory.o 00:04:14.696 CC lib/idxd/idxd.o 00:04:14.696 CC lib/env_dpdk/pci.o 00:04:14.696 CC lib/vmd/vmd.o 00:04:14.696 LIB libspdk_conf.a 
00:04:14.696 CC lib/vmd/led.o 00:04:14.696 SO libspdk_conf.so.6.0 00:04:14.696 CC lib/idxd/idxd_user.o 00:04:14.696 LIB libspdk_rdma_utils.a 00:04:14.696 SO libspdk_rdma_utils.so.1.0 00:04:14.696 LIB libspdk_json.a 00:04:14.696 SYMLINK libspdk_conf.so 00:04:14.696 CC lib/idxd/idxd_kernel.o 00:04:14.696 SO libspdk_json.so.6.0 00:04:14.696 SYMLINK libspdk_rdma_utils.so 00:04:14.696 CC lib/env_dpdk/init.o 00:04:14.696 SYMLINK libspdk_json.so 00:04:14.955 CC lib/env_dpdk/threads.o 00:04:14.955 CC lib/rdma_provider/common.o 00:04:14.955 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:14.955 CC lib/env_dpdk/pci_ioat.o 00:04:14.955 CC lib/env_dpdk/pci_virtio.o 00:04:15.213 CC lib/env_dpdk/pci_vmd.o 00:04:15.213 CC lib/jsonrpc/jsonrpc_server.o 00:04:15.213 CC lib/env_dpdk/pci_idxd.o 00:04:15.213 CC lib/env_dpdk/pci_event.o 00:04:15.213 CC lib/env_dpdk/sigbus_handler.o 00:04:15.213 LIB libspdk_idxd.a 00:04:15.472 SO libspdk_idxd.so.12.1 00:04:15.472 LIB libspdk_rdma_provider.a 00:04:15.472 SO libspdk_rdma_provider.so.7.0 00:04:15.472 SYMLINK libspdk_idxd.so 00:04:15.472 CC lib/env_dpdk/pci_dpdk.o 00:04:15.472 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:15.472 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:15.472 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:15.472 CC lib/jsonrpc/jsonrpc_client.o 00:04:15.472 SYMLINK libspdk_rdma_provider.so 00:04:15.472 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:15.472 LIB libspdk_vmd.a 00:04:15.730 SO libspdk_vmd.so.6.0 00:04:15.730 SYMLINK libspdk_vmd.so 00:04:15.989 LIB libspdk_jsonrpc.a 00:04:15.989 SO libspdk_jsonrpc.so.6.0 00:04:15.989 SYMLINK libspdk_jsonrpc.so 00:04:16.249 CC lib/rpc/rpc.o 00:04:16.249 LIB libspdk_env_dpdk.a 00:04:16.508 SO libspdk_env_dpdk.so.15.1 00:04:16.508 LIB libspdk_rpc.a 00:04:16.508 SO libspdk_rpc.so.6.0 00:04:16.508 SYMLINK libspdk_rpc.so 00:04:16.508 SYMLINK libspdk_env_dpdk.so 00:04:16.766 CC lib/keyring/keyring.o 00:04:16.766 CC lib/keyring/keyring_rpc.o 00:04:16.766 CC lib/notify/notify_rpc.o 00:04:16.766 CC 
lib/notify/notify.o 00:04:16.766 CC lib/trace/trace.o 00:04:16.766 CC lib/trace/trace_flags.o 00:04:16.766 CC lib/trace/trace_rpc.o 00:04:17.025 LIB libspdk_notify.a 00:04:17.025 SO libspdk_notify.so.6.0 00:04:17.025 LIB libspdk_trace.a 00:04:17.025 SYMLINK libspdk_notify.so 00:04:17.284 LIB libspdk_keyring.a 00:04:17.284 SO libspdk_trace.so.11.0 00:04:17.284 SO libspdk_keyring.so.2.0 00:04:17.284 SYMLINK libspdk_trace.so 00:04:17.284 SYMLINK libspdk_keyring.so 00:04:17.544 CC lib/thread/thread.o 00:04:17.544 CC lib/thread/iobuf.o 00:04:17.544 CC lib/sock/sock.o 00:04:17.544 CC lib/sock/sock_rpc.o 00:04:18.114 LIB libspdk_sock.a 00:04:18.114 SO libspdk_sock.so.10.0 00:04:18.114 SYMLINK libspdk_sock.so 00:04:18.373 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:18.373 CC lib/nvme/nvme_fabric.o 00:04:18.373 CC lib/nvme/nvme_ctrlr.o 00:04:18.373 CC lib/nvme/nvme_ns_cmd.o 00:04:18.373 CC lib/nvme/nvme_ns.o 00:04:18.373 CC lib/nvme/nvme_pcie_common.o 00:04:18.373 CC lib/nvme/nvme_qpair.o 00:04:18.373 CC lib/nvme/nvme_pcie.o 00:04:18.373 CC lib/nvme/nvme.o 00:04:19.314 CC lib/nvme/nvme_quirks.o 00:04:19.314 CC lib/nvme/nvme_transport.o 00:04:19.314 CC lib/nvme/nvme_discovery.o 00:04:19.572 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:19.572 LIB libspdk_thread.a 00:04:19.572 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:19.572 SO libspdk_thread.so.11.0 00:04:19.830 CC lib/nvme/nvme_tcp.o 00:04:19.830 SYMLINK libspdk_thread.so 00:04:19.830 CC lib/nvme/nvme_opal.o 00:04:19.830 CC lib/nvme/nvme_io_msg.o 00:04:20.089 CC lib/nvme/nvme_poll_group.o 00:04:20.089 CC lib/accel/accel.o 00:04:20.347 CC lib/accel/accel_rpc.o 00:04:20.347 CC lib/accel/accel_sw.o 00:04:20.347 CC lib/nvme/nvme_zns.o 00:04:20.347 CC lib/blob/blobstore.o 00:04:20.605 CC lib/blob/request.o 00:04:20.864 CC lib/nvme/nvme_stubs.o 00:04:20.864 CC lib/nvme/nvme_auth.o 00:04:20.864 CC lib/blob/zeroes.o 00:04:20.864 CC lib/nvme/nvme_cuse.o 00:04:21.122 CC lib/blob/blob_bs_dev.o 00:04:21.122 CC lib/init/json_config.o 00:04:21.380 CC 
lib/init/subsystem.o 00:04:21.380 CC lib/init/subsystem_rpc.o 00:04:21.380 CC lib/init/rpc.o 00:04:21.380 CC lib/nvme/nvme_rdma.o 00:04:21.639 LIB libspdk_init.a 00:04:21.639 SO libspdk_init.so.6.0 00:04:21.639 CC lib/fsdev/fsdev.o 00:04:21.639 CC lib/fsdev/fsdev_io.o 00:04:21.639 CC lib/virtio/virtio.o 00:04:21.639 SYMLINK libspdk_init.so 00:04:21.639 CC lib/fsdev/fsdev_rpc.o 00:04:21.897 CC lib/virtio/virtio_vhost_user.o 00:04:21.897 CC lib/virtio/virtio_vfio_user.o 00:04:22.156 CC lib/virtio/virtio_pci.o 00:04:22.414 CC lib/event/app.o 00:04:22.414 CC lib/event/reactor.o 00:04:22.414 CC lib/event/log_rpc.o 00:04:22.414 CC lib/event/app_rpc.o 00:04:22.414 LIB libspdk_accel.a 00:04:22.414 SO libspdk_accel.so.16.0 00:04:22.414 LIB libspdk_fsdev.a 00:04:22.672 CC lib/event/scheduler_static.o 00:04:22.672 SYMLINK libspdk_accel.so 00:04:22.672 SO libspdk_fsdev.so.2.0 00:04:22.672 LIB libspdk_virtio.a 00:04:22.672 SO libspdk_virtio.so.7.0 00:04:22.672 SYMLINK libspdk_fsdev.so 00:04:22.672 CC lib/bdev/bdev.o 00:04:22.672 CC lib/bdev/bdev_rpc.o 00:04:22.672 CC lib/bdev/bdev_zone.o 00:04:22.672 SYMLINK libspdk_virtio.so 00:04:22.672 CC lib/bdev/part.o 00:04:22.931 CC lib/bdev/scsi_nvme.o 00:04:22.931 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:23.189 LIB libspdk_nvme.a 00:04:23.448 LIB libspdk_event.a 00:04:23.448 SO libspdk_event.so.14.0 00:04:23.448 SO libspdk_nvme.so.15.0 00:04:23.707 SYMLINK libspdk_event.so 00:04:23.707 LIB libspdk_fuse_dispatcher.a 00:04:23.707 SO libspdk_fuse_dispatcher.so.1.0 00:04:23.707 SYMLINK libspdk_nvme.so 00:04:23.965 SYMLINK libspdk_fuse_dispatcher.so 00:04:26.497 LIB libspdk_blob.a 00:04:26.497 SO libspdk_blob.so.11.0 00:04:26.497 SYMLINK libspdk_blob.so 00:04:26.766 CC lib/lvol/lvol.o 00:04:26.766 CC lib/blobfs/blobfs.o 00:04:26.766 CC lib/blobfs/tree.o 00:04:26.766 LIB libspdk_bdev.a 00:04:26.766 SO libspdk_bdev.so.17.0 00:04:27.037 SYMLINK libspdk_bdev.so 00:04:27.296 CC lib/ublk/ublk.o 00:04:27.296 CC lib/ublk/ublk_rpc.o 
00:04:27.296 CC lib/scsi/dev.o 00:04:27.296 CC lib/scsi/lun.o 00:04:27.296 CC lib/scsi/port.o 00:04:27.296 CC lib/nvmf/ctrlr.o 00:04:27.296 CC lib/ftl/ftl_core.o 00:04:27.296 CC lib/nbd/nbd.o 00:04:27.554 CC lib/nbd/nbd_rpc.o 00:04:27.554 CC lib/nvmf/ctrlr_discovery.o 00:04:27.554 CC lib/scsi/scsi.o 00:04:27.813 CC lib/scsi/scsi_bdev.o 00:04:27.813 CC lib/ftl/ftl_init.o 00:04:27.814 CC lib/ftl/ftl_layout.o 00:04:27.814 CC lib/ftl/ftl_debug.o 00:04:27.814 LIB libspdk_nbd.a 00:04:28.072 SO libspdk_nbd.so.7.0 00:04:28.072 LIB libspdk_blobfs.a 00:04:28.072 CC lib/nvmf/ctrlr_bdev.o 00:04:28.072 SYMLINK libspdk_nbd.so 00:04:28.072 CC lib/nvmf/subsystem.o 00:04:28.072 SO libspdk_blobfs.so.10.0 00:04:28.072 SYMLINK libspdk_blobfs.so 00:04:28.072 CC lib/nvmf/nvmf.o 00:04:28.072 CC lib/ftl/ftl_io.o 00:04:28.332 LIB libspdk_lvol.a 00:04:28.332 SO libspdk_lvol.so.10.0 00:04:28.332 CC lib/scsi/scsi_pr.o 00:04:28.332 SYMLINK libspdk_lvol.so 00:04:28.332 CC lib/scsi/scsi_rpc.o 00:04:28.332 CC lib/nvmf/nvmf_rpc.o 00:04:28.590 CC lib/ftl/ftl_sb.o 00:04:28.590 CC lib/ftl/ftl_l2p.o 00:04:28.590 CC lib/ftl/ftl_l2p_flat.o 00:04:28.850 CC lib/scsi/task.o 00:04:28.850 LIB libspdk_ublk.a 00:04:28.850 CC lib/ftl/ftl_nv_cache.o 00:04:28.850 SO libspdk_ublk.so.3.0 00:04:28.850 CC lib/ftl/ftl_band.o 00:04:28.850 SYMLINK libspdk_ublk.so 00:04:28.850 CC lib/ftl/ftl_band_ops.o 00:04:28.850 CC lib/nvmf/transport.o 00:04:29.108 LIB libspdk_scsi.a 00:04:29.108 CC lib/nvmf/tcp.o 00:04:29.108 SO libspdk_scsi.so.9.0 00:04:29.368 SYMLINK libspdk_scsi.so 00:04:29.368 CC lib/nvmf/stubs.o 00:04:29.368 CC lib/nvmf/mdns_server.o 00:04:29.627 CC lib/ftl/ftl_writer.o 00:04:29.886 CC lib/nvmf/rdma.o 00:04:29.886 CC lib/iscsi/conn.o 00:04:30.145 CC lib/iscsi/init_grp.o 00:04:30.145 CC lib/iscsi/iscsi.o 00:04:30.145 CC lib/vhost/vhost.o 00:04:30.403 CC lib/vhost/vhost_rpc.o 00:04:30.403 CC lib/nvmf/auth.o 00:04:30.403 CC lib/iscsi/param.o 00:04:30.665 CC lib/ftl/ftl_rq.o 00:04:30.665 CC lib/ftl/ftl_reloc.o 
00:04:30.995 CC lib/ftl/ftl_l2p_cache.o 00:04:30.995 CC lib/ftl/ftl_p2l.o 00:04:30.995 CC lib/ftl/ftl_p2l_log.o 00:04:30.995 CC lib/ftl/mngt/ftl_mngt.o 00:04:31.254 CC lib/iscsi/portal_grp.o 00:04:31.254 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:31.254 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:31.254 CC lib/vhost/vhost_scsi.o 00:04:31.514 CC lib/vhost/vhost_blk.o 00:04:31.514 CC lib/vhost/rte_vhost_user.o 00:04:31.514 CC lib/iscsi/tgt_node.o 00:04:31.514 CC lib/iscsi/iscsi_subsystem.o 00:04:31.514 CC lib/iscsi/iscsi_rpc.o 00:04:31.514 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:31.775 CC lib/iscsi/task.o 00:04:31.775 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:31.775 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:32.034 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:32.034 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:32.034 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:32.034 LIB libspdk_iscsi.a 00:04:32.293 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:32.293 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:32.293 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:32.293 SO libspdk_iscsi.so.8.0 00:04:32.293 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:32.552 SYMLINK libspdk_iscsi.so 00:04:32.552 CC lib/ftl/utils/ftl_conf.o 00:04:32.552 CC lib/ftl/utils/ftl_md.o 00:04:32.552 CC lib/ftl/utils/ftl_mempool.o 00:04:32.552 CC lib/ftl/utils/ftl_bitmap.o 00:04:32.552 CC lib/ftl/utils/ftl_property.o 00:04:32.552 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:32.552 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:32.811 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:32.811 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:32.811 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:32.811 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:32.811 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:32.811 LIB libspdk_vhost.a 00:04:32.811 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:32.811 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:32.811 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:33.070 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:33.070 SO libspdk_vhost.so.8.0 00:04:33.070 LIB libspdk_nvmf.a 00:04:33.070 CC 
lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:33.070 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:33.070 CC lib/ftl/base/ftl_base_dev.o 00:04:33.070 CC lib/ftl/base/ftl_base_bdev.o 00:04:33.070 SYMLINK libspdk_vhost.so 00:04:33.070 CC lib/ftl/ftl_trace.o 00:04:33.330 SO libspdk_nvmf.so.20.0 00:04:33.589 LIB libspdk_ftl.a 00:04:33.589 SYMLINK libspdk_nvmf.so 00:04:33.847 SO libspdk_ftl.so.9.0 00:04:34.106 SYMLINK libspdk_ftl.so 00:04:34.366 CC module/env_dpdk/env_dpdk_rpc.o 00:04:34.625 CC module/accel/ioat/accel_ioat.o 00:04:34.625 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:34.625 CC module/scheduler/gscheduler/gscheduler.o 00:04:34.625 CC module/accel/error/accel_error.o 00:04:34.625 CC module/fsdev/aio/fsdev_aio.o 00:04:34.625 CC module/sock/posix/posix.o 00:04:34.625 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:34.625 CC module/blob/bdev/blob_bdev.o 00:04:34.625 CC module/keyring/file/keyring.o 00:04:34.625 LIB libspdk_env_dpdk_rpc.a 00:04:34.625 SO libspdk_env_dpdk_rpc.so.6.0 00:04:34.883 LIB libspdk_scheduler_dpdk_governor.a 00:04:34.883 SYMLINK libspdk_env_dpdk_rpc.so 00:04:34.883 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:34.883 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:34.883 CC module/accel/ioat/accel_ioat_rpc.o 00:04:34.883 LIB libspdk_scheduler_dynamic.a 00:04:34.883 LIB libspdk_scheduler_gscheduler.a 00:04:34.883 SO libspdk_scheduler_dynamic.so.4.0 00:04:34.883 CC module/keyring/file/keyring_rpc.o 00:04:34.883 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:34.883 CC module/accel/error/accel_error_rpc.o 00:04:34.883 SO libspdk_scheduler_gscheduler.so.4.0 00:04:34.883 SYMLINK libspdk_scheduler_dynamic.so 00:04:34.884 CC module/fsdev/aio/linux_aio_mgr.o 00:04:34.884 LIB libspdk_blob_bdev.a 00:04:34.884 SYMLINK libspdk_scheduler_gscheduler.so 00:04:35.143 SO libspdk_blob_bdev.so.11.0 00:04:35.143 LIB libspdk_accel_error.a 00:04:35.143 LIB libspdk_keyring_file.a 00:04:35.143 LIB libspdk_accel_ioat.a 00:04:35.143 SO 
libspdk_accel_error.so.2.0 00:04:35.143 SO libspdk_keyring_file.so.2.0 00:04:35.143 CC module/keyring/linux/keyring.o 00:04:35.143 SYMLINK libspdk_blob_bdev.so 00:04:35.143 SO libspdk_accel_ioat.so.6.0 00:04:35.143 CC module/keyring/linux/keyring_rpc.o 00:04:35.143 SYMLINK libspdk_accel_error.so 00:04:35.143 CC module/accel/iaa/accel_iaa.o 00:04:35.143 SYMLINK libspdk_keyring_file.so 00:04:35.143 CC module/accel/dsa/accel_dsa.o 00:04:35.143 CC module/accel/iaa/accel_iaa_rpc.o 00:04:35.402 SYMLINK libspdk_accel_ioat.so 00:04:35.402 CC module/accel/dsa/accel_dsa_rpc.o 00:04:35.402 LIB libspdk_keyring_linux.a 00:04:35.402 SO libspdk_keyring_linux.so.1.0 00:04:35.402 LIB libspdk_accel_iaa.a 00:04:35.661 SO libspdk_accel_iaa.so.3.0 00:04:35.661 SYMLINK libspdk_keyring_linux.so 00:04:35.661 SYMLINK libspdk_accel_iaa.so 00:04:35.661 CC module/bdev/delay/vbdev_delay.o 00:04:35.661 CC module/blobfs/bdev/blobfs_bdev.o 00:04:35.661 CC module/bdev/error/vbdev_error.o 00:04:35.661 LIB libspdk_accel_dsa.a 00:04:35.661 CC module/bdev/gpt/gpt.o 00:04:35.661 CC module/bdev/lvol/vbdev_lvol.o 00:04:35.661 SO libspdk_accel_dsa.so.5.0 00:04:35.921 CC module/bdev/null/bdev_null.o 00:04:35.921 CC module/bdev/malloc/bdev_malloc.o 00:04:35.921 SYMLINK libspdk_accel_dsa.so 00:04:35.921 CC module/bdev/gpt/vbdev_gpt.o 00:04:35.921 LIB libspdk_sock_posix.a 00:04:35.921 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:35.921 LIB libspdk_fsdev_aio.a 00:04:35.921 SO libspdk_sock_posix.so.6.0 00:04:35.921 SO libspdk_fsdev_aio.so.1.0 00:04:35.921 CC module/bdev/error/vbdev_error_rpc.o 00:04:35.921 SYMLINK libspdk_sock_posix.so 00:04:35.921 CC module/bdev/null/bdev_null_rpc.o 00:04:35.921 SYMLINK libspdk_fsdev_aio.so 00:04:35.921 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:36.180 LIB libspdk_blobfs_bdev.a 00:04:36.180 SO libspdk_blobfs_bdev.so.6.0 00:04:36.180 LIB libspdk_bdev_error.a 00:04:36.180 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:36.180 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:36.180 
SYMLINK libspdk_blobfs_bdev.so 00:04:36.180 CC module/bdev/nvme/bdev_nvme.o 00:04:36.180 SO libspdk_bdev_error.so.6.0 00:04:36.180 LIB libspdk_bdev_null.a 00:04:36.180 LIB libspdk_bdev_gpt.a 00:04:36.180 LIB libspdk_bdev_delay.a 00:04:36.180 SO libspdk_bdev_null.so.6.0 00:04:36.180 SO libspdk_bdev_gpt.so.6.0 00:04:36.180 SO libspdk_bdev_delay.so.6.0 00:04:36.180 SYMLINK libspdk_bdev_error.so 00:04:36.180 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:36.180 CC module/bdev/nvme/nvme_rpc.o 00:04:36.439 SYMLINK libspdk_bdev_gpt.so 00:04:36.439 SYMLINK libspdk_bdev_null.so 00:04:36.439 CC module/bdev/nvme/bdev_mdns_client.o 00:04:36.439 CC module/bdev/nvme/vbdev_opal.o 00:04:36.439 SYMLINK libspdk_bdev_delay.so 00:04:36.439 LIB libspdk_bdev_malloc.a 00:04:36.439 CC module/bdev/passthru/vbdev_passthru.o 00:04:36.439 SO libspdk_bdev_malloc.so.6.0 00:04:36.439 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:36.439 SYMLINK libspdk_bdev_malloc.so 00:04:36.439 CC module/bdev/raid/bdev_raid.o 00:04:36.439 CC module/bdev/raid/bdev_raid_rpc.o 00:04:36.699 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:36.699 LIB libspdk_bdev_lvol.a 00:04:36.699 CC module/bdev/split/vbdev_split.o 00:04:36.699 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:36.699 CC module/bdev/split/vbdev_split_rpc.o 00:04:36.699 SO libspdk_bdev_lvol.so.6.0 00:04:36.699 LIB libspdk_bdev_passthru.a 00:04:36.699 SYMLINK libspdk_bdev_lvol.so 00:04:36.699 SO libspdk_bdev_passthru.so.6.0 00:04:36.958 SYMLINK libspdk_bdev_passthru.so 00:04:36.958 CC module/bdev/raid/bdev_raid_sb.o 00:04:36.958 CC module/bdev/raid/raid0.o 00:04:36.958 LIB libspdk_bdev_split.a 00:04:36.958 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:36.958 SO libspdk_bdev_split.so.6.0 00:04:36.958 CC module/bdev/aio/bdev_aio.o 00:04:36.958 CC module/bdev/ftl/bdev_ftl.o 00:04:37.218 SYMLINK libspdk_bdev_split.so 00:04:37.218 CC module/bdev/iscsi/bdev_iscsi.o 00:04:37.218 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:37.218 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:37.218 CC module/bdev/raid/raid1.o 00:04:37.218 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:37.477 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:37.477 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:37.477 LIB libspdk_bdev_zone_block.a 00:04:37.477 SO libspdk_bdev_zone_block.so.6.0 00:04:37.477 CC module/bdev/aio/bdev_aio_rpc.o 00:04:37.477 LIB libspdk_bdev_ftl.a 00:04:37.477 CC module/bdev/raid/concat.o 00:04:37.477 SYMLINK libspdk_bdev_zone_block.so 00:04:37.477 CC module/bdev/raid/raid5f.o 00:04:37.477 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:37.477 SO libspdk_bdev_ftl.so.6.0 00:04:37.736 SYMLINK libspdk_bdev_ftl.so 00:04:37.736 LIB libspdk_bdev_aio.a 00:04:37.736 SO libspdk_bdev_aio.so.6.0 00:04:37.736 LIB libspdk_bdev_iscsi.a 00:04:37.736 SO libspdk_bdev_iscsi.so.6.0 00:04:37.736 SYMLINK libspdk_bdev_aio.so 00:04:37.736 SYMLINK libspdk_bdev_iscsi.so 00:04:37.995 LIB libspdk_bdev_virtio.a 00:04:37.995 SO libspdk_bdev_virtio.so.6.0 00:04:37.995 SYMLINK libspdk_bdev_virtio.so 00:04:38.255 LIB libspdk_bdev_raid.a 00:04:38.255 SO libspdk_bdev_raid.so.6.0 00:04:38.255 SYMLINK libspdk_bdev_raid.so 00:04:39.634 LIB libspdk_bdev_nvme.a 00:04:39.634 SO libspdk_bdev_nvme.so.7.1 00:04:39.893 SYMLINK libspdk_bdev_nvme.so 00:04:40.469 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:40.469 CC module/event/subsystems/iobuf/iobuf.o 00:04:40.469 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:40.469 CC module/event/subsystems/keyring/keyring.o 00:04:40.469 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:40.469 CC module/event/subsystems/vmd/vmd.o 00:04:40.469 CC module/event/subsystems/scheduler/scheduler.o 00:04:40.469 CC module/event/subsystems/sock/sock.o 00:04:40.469 CC module/event/subsystems/fsdev/fsdev.o 00:04:40.469 LIB libspdk_event_keyring.a 00:04:40.728 LIB libspdk_event_vhost_blk.a 00:04:40.728 SO libspdk_event_keyring.so.1.0 00:04:40.728 LIB libspdk_event_scheduler.a 00:04:40.728 SO 
libspdk_event_vhost_blk.so.3.0 00:04:40.728 LIB libspdk_event_fsdev.a 00:04:40.728 SO libspdk_event_scheduler.so.4.0 00:04:40.728 LIB libspdk_event_iobuf.a 00:04:40.728 LIB libspdk_event_sock.a 00:04:40.728 SO libspdk_event_fsdev.so.1.0 00:04:40.728 LIB libspdk_event_vmd.a 00:04:40.728 SYMLINK libspdk_event_keyring.so 00:04:40.728 SO libspdk_event_iobuf.so.3.0 00:04:40.728 SO libspdk_event_sock.so.5.0 00:04:40.728 SYMLINK libspdk_event_vhost_blk.so 00:04:40.728 SO libspdk_event_vmd.so.6.0 00:04:40.728 SYMLINK libspdk_event_scheduler.so 00:04:40.728 SYMLINK libspdk_event_fsdev.so 00:04:40.728 SYMLINK libspdk_event_iobuf.so 00:04:40.728 SYMLINK libspdk_event_sock.so 00:04:40.728 SYMLINK libspdk_event_vmd.so 00:04:40.986 CC module/event/subsystems/accel/accel.o 00:04:41.245 LIB libspdk_event_accel.a 00:04:41.245 SO libspdk_event_accel.so.6.0 00:04:41.245 SYMLINK libspdk_event_accel.so 00:04:41.503 CC module/event/subsystems/bdev/bdev.o 00:04:41.762 LIB libspdk_event_bdev.a 00:04:41.762 SO libspdk_event_bdev.so.6.0 00:04:41.762 SYMLINK libspdk_event_bdev.so 00:04:42.021 CC module/event/subsystems/nbd/nbd.o 00:04:42.021 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:42.021 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:42.021 CC module/event/subsystems/scsi/scsi.o 00:04:42.021 CC module/event/subsystems/ublk/ublk.o 00:04:42.280 LIB libspdk_event_nbd.a 00:04:42.280 LIB libspdk_event_scsi.a 00:04:42.280 LIB libspdk_event_ublk.a 00:04:42.280 SO libspdk_event_nbd.so.6.0 00:04:42.280 SO libspdk_event_scsi.so.6.0 00:04:42.280 SO libspdk_event_ublk.so.3.0 00:04:42.280 SYMLINK libspdk_event_nbd.so 00:04:42.280 SYMLINK libspdk_event_scsi.so 00:04:42.280 SYMLINK libspdk_event_ublk.so 00:04:42.280 LIB libspdk_event_nvmf.a 00:04:42.539 SO libspdk_event_nvmf.so.6.0 00:04:42.539 SYMLINK libspdk_event_nvmf.so 00:04:42.539 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:42.539 CC module/event/subsystems/iscsi/iscsi.o 00:04:42.798 LIB libspdk_event_vhost_scsi.a 00:04:42.798 
SO libspdk_event_vhost_scsi.so.3.0 00:04:42.798 LIB libspdk_event_iscsi.a 00:04:42.798 SO libspdk_event_iscsi.so.6.0 00:04:43.057 SYMLINK libspdk_event_vhost_scsi.so 00:04:43.057 SYMLINK libspdk_event_iscsi.so 00:04:43.057 SO libspdk.so.6.0 00:04:43.057 SYMLINK libspdk.so 00:04:43.316 CXX app/trace/trace.o 00:04:43.316 CC test/rpc_client/rpc_client_test.o 00:04:43.316 TEST_HEADER include/spdk/accel.h 00:04:43.316 TEST_HEADER include/spdk/accel_module.h 00:04:43.316 TEST_HEADER include/spdk/assert.h 00:04:43.316 TEST_HEADER include/spdk/barrier.h 00:04:43.316 TEST_HEADER include/spdk/base64.h 00:04:43.316 TEST_HEADER include/spdk/bdev.h 00:04:43.316 TEST_HEADER include/spdk/bdev_module.h 00:04:43.316 TEST_HEADER include/spdk/bdev_zone.h 00:04:43.316 TEST_HEADER include/spdk/bit_array.h 00:04:43.316 TEST_HEADER include/spdk/bit_pool.h 00:04:43.316 TEST_HEADER include/spdk/blob_bdev.h 00:04:43.316 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:43.316 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:43.316 TEST_HEADER include/spdk/blobfs.h 00:04:43.316 TEST_HEADER include/spdk/blob.h 00:04:43.316 TEST_HEADER include/spdk/conf.h 00:04:43.316 TEST_HEADER include/spdk/config.h 00:04:43.316 TEST_HEADER include/spdk/cpuset.h 00:04:43.316 TEST_HEADER include/spdk/crc16.h 00:04:43.316 TEST_HEADER include/spdk/crc32.h 00:04:43.316 TEST_HEADER include/spdk/crc64.h 00:04:43.316 TEST_HEADER include/spdk/dif.h 00:04:43.316 TEST_HEADER include/spdk/dma.h 00:04:43.576 TEST_HEADER include/spdk/endian.h 00:04:43.576 TEST_HEADER include/spdk/env_dpdk.h 00:04:43.576 TEST_HEADER include/spdk/env.h 00:04:43.576 TEST_HEADER include/spdk/event.h 00:04:43.576 TEST_HEADER include/spdk/fd_group.h 00:04:43.576 TEST_HEADER include/spdk/fd.h 00:04:43.576 TEST_HEADER include/spdk/file.h 00:04:43.576 TEST_HEADER include/spdk/fsdev.h 00:04:43.576 TEST_HEADER include/spdk/fsdev_module.h 00:04:43.576 TEST_HEADER include/spdk/ftl.h 00:04:43.576 CC examples/ioat/perf/perf.o 00:04:43.576 TEST_HEADER 
include/spdk/fuse_dispatcher.h 00:04:43.576 TEST_HEADER include/spdk/gpt_spec.h 00:04:43.576 TEST_HEADER include/spdk/hexlify.h 00:04:43.576 TEST_HEADER include/spdk/histogram_data.h 00:04:43.576 CC examples/util/zipf/zipf.o 00:04:43.576 TEST_HEADER include/spdk/idxd.h 00:04:43.576 TEST_HEADER include/spdk/idxd_spec.h 00:04:43.576 TEST_HEADER include/spdk/init.h 00:04:43.576 TEST_HEADER include/spdk/ioat.h 00:04:43.576 TEST_HEADER include/spdk/ioat_spec.h 00:04:43.576 TEST_HEADER include/spdk/iscsi_spec.h 00:04:43.576 TEST_HEADER include/spdk/json.h 00:04:43.576 TEST_HEADER include/spdk/jsonrpc.h 00:04:43.576 TEST_HEADER include/spdk/keyring.h 00:04:43.576 TEST_HEADER include/spdk/keyring_module.h 00:04:43.576 TEST_HEADER include/spdk/likely.h 00:04:43.576 CC test/thread/poller_perf/poller_perf.o 00:04:43.576 TEST_HEADER include/spdk/log.h 00:04:43.576 TEST_HEADER include/spdk/lvol.h 00:04:43.576 TEST_HEADER include/spdk/md5.h 00:04:43.576 CC test/app/bdev_svc/bdev_svc.o 00:04:43.576 TEST_HEADER include/spdk/memory.h 00:04:43.576 TEST_HEADER include/spdk/mmio.h 00:04:43.576 TEST_HEADER include/spdk/nbd.h 00:04:43.576 TEST_HEADER include/spdk/net.h 00:04:43.576 TEST_HEADER include/spdk/notify.h 00:04:43.576 CC test/dma/test_dma/test_dma.o 00:04:43.576 TEST_HEADER include/spdk/nvme.h 00:04:43.576 TEST_HEADER include/spdk/nvme_intel.h 00:04:43.576 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:43.576 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:43.576 TEST_HEADER include/spdk/nvme_spec.h 00:04:43.576 TEST_HEADER include/spdk/nvme_zns.h 00:04:43.576 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:43.576 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:43.576 TEST_HEADER include/spdk/nvmf.h 00:04:43.576 TEST_HEADER include/spdk/nvmf_spec.h 00:04:43.576 TEST_HEADER include/spdk/nvmf_transport.h 00:04:43.576 TEST_HEADER include/spdk/opal.h 00:04:43.576 TEST_HEADER include/spdk/opal_spec.h 00:04:43.576 TEST_HEADER include/spdk/pci_ids.h 00:04:43.576 TEST_HEADER include/spdk/pipe.h 
00:04:43.576 TEST_HEADER include/spdk/queue.h 00:04:43.576 TEST_HEADER include/spdk/reduce.h 00:04:43.576 TEST_HEADER include/spdk/rpc.h 00:04:43.576 TEST_HEADER include/spdk/scheduler.h 00:04:43.576 TEST_HEADER include/spdk/scsi.h 00:04:43.576 TEST_HEADER include/spdk/scsi_spec.h 00:04:43.576 TEST_HEADER include/spdk/sock.h 00:04:43.576 TEST_HEADER include/spdk/stdinc.h 00:04:43.576 CC test/env/mem_callbacks/mem_callbacks.o 00:04:43.576 TEST_HEADER include/spdk/string.h 00:04:43.576 TEST_HEADER include/spdk/thread.h 00:04:43.576 TEST_HEADER include/spdk/trace.h 00:04:43.576 TEST_HEADER include/spdk/trace_parser.h 00:04:43.576 TEST_HEADER include/spdk/tree.h 00:04:43.576 TEST_HEADER include/spdk/ublk.h 00:04:43.576 TEST_HEADER include/spdk/util.h 00:04:43.576 TEST_HEADER include/spdk/uuid.h 00:04:43.576 TEST_HEADER include/spdk/version.h 00:04:43.576 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:43.576 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:43.576 TEST_HEADER include/spdk/vhost.h 00:04:43.576 TEST_HEADER include/spdk/vmd.h 00:04:43.576 TEST_HEADER include/spdk/xor.h 00:04:43.576 TEST_HEADER include/spdk/zipf.h 00:04:43.576 CXX test/cpp_headers/accel.o 00:04:43.576 LINK rpc_client_test 00:04:43.576 LINK interrupt_tgt 00:04:43.834 LINK poller_perf 00:04:43.835 LINK zipf 00:04:43.835 LINK bdev_svc 00:04:43.835 LINK ioat_perf 00:04:43.835 CXX test/cpp_headers/accel_module.o 00:04:43.835 CXX test/cpp_headers/assert.o 00:04:43.835 LINK spdk_trace 00:04:44.093 CC test/env/vtophys/vtophys.o 00:04:44.093 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:44.093 CC examples/ioat/verify/verify.o 00:04:44.093 CXX test/cpp_headers/barrier.o 00:04:44.093 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:44.093 CC test/app/histogram_perf/histogram_perf.o 00:04:44.093 LINK vtophys 00:04:44.093 LINK env_dpdk_post_init 00:04:44.093 CC test/event/event_perf/event_perf.o 00:04:44.351 CC app/trace_record/trace_record.o 00:04:44.351 LINK test_dma 00:04:44.351 LINK 
mem_callbacks 00:04:44.351 CXX test/cpp_headers/base64.o 00:04:44.351 LINK histogram_perf 00:04:44.351 LINK verify 00:04:44.351 LINK event_perf 00:04:44.351 CC test/env/memory/memory_ut.o 00:04:44.351 CXX test/cpp_headers/bdev.o 00:04:44.610 CC test/app/jsoncat/jsoncat.o 00:04:44.610 CC test/app/stub/stub.o 00:04:44.610 LINK spdk_trace_record 00:04:44.610 CC test/env/pci/pci_ut.o 00:04:44.610 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:44.610 LINK jsoncat 00:04:44.610 CXX test/cpp_headers/bdev_module.o 00:04:44.610 CC test/event/reactor/reactor.o 00:04:44.610 LINK nvme_fuzz 00:04:44.610 CC examples/thread/thread/thread_ex.o 00:04:44.869 LINK stub 00:04:44.869 LINK reactor 00:04:44.869 CXX test/cpp_headers/bdev_zone.o 00:04:44.869 CC app/nvmf_tgt/nvmf_main.o 00:04:45.127 CC app/iscsi_tgt/iscsi_tgt.o 00:04:45.127 CC test/event/reactor_perf/reactor_perf.o 00:04:45.127 LINK thread 00:04:45.127 CXX test/cpp_headers/bit_array.o 00:04:45.127 LINK reactor_perf 00:04:45.385 LINK nvmf_tgt 00:04:45.385 LINK iscsi_tgt 00:04:45.385 LINK pci_ut 00:04:45.385 CC examples/sock/hello_world/hello_sock.o 00:04:45.385 CXX test/cpp_headers/bit_pool.o 00:04:45.644 CC test/accel/dif/dif.o 00:04:45.644 CC test/event/app_repeat/app_repeat.o 00:04:45.644 CC test/blobfs/mkfs/mkfs.o 00:04:45.644 CC test/event/scheduler/scheduler.o 00:04:45.644 LINK hello_sock 00:04:45.644 CXX test/cpp_headers/blob_bdev.o 00:04:45.644 CXX test/cpp_headers/blobfs_bdev.o 00:04:45.903 LINK app_repeat 00:04:45.903 CC app/spdk_tgt/spdk_tgt.o 00:04:45.903 LINK memory_ut 00:04:45.903 LINK scheduler 00:04:45.903 LINK mkfs 00:04:46.162 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:46.162 CXX test/cpp_headers/blobfs.o 00:04:46.162 CC examples/vmd/lsvmd/lsvmd.o 00:04:46.162 LINK spdk_tgt 00:04:46.162 CC examples/vmd/led/led.o 00:04:46.162 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:46.162 LINK lsvmd 00:04:46.162 CXX test/cpp_headers/blob.o 00:04:46.162 LINK led 00:04:46.420 CXX test/cpp_headers/conf.o 
00:04:46.420 CC app/spdk_lspci/spdk_lspci.o 00:04:46.420 LINK dif 00:04:46.420 CC test/lvol/esnap/esnap.o 00:04:46.420 LINK spdk_lspci 00:04:46.420 CXX test/cpp_headers/config.o 00:04:46.679 CC test/nvme/aer/aer.o 00:04:46.679 CXX test/cpp_headers/cpuset.o 00:04:46.679 CC examples/idxd/perf/perf.o 00:04:46.679 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:46.679 CC examples/accel/perf/accel_perf.o 00:04:46.679 CXX test/cpp_headers/crc16.o 00:04:46.679 CC app/spdk_nvme_perf/perf.o 00:04:46.679 LINK vhost_fuzz 00:04:46.679 CC app/spdk_nvme_identify/identify.o 00:04:46.938 LINK iscsi_fuzz 00:04:46.938 CXX test/cpp_headers/crc32.o 00:04:46.938 LINK aer 00:04:46.938 LINK hello_fsdev 00:04:47.196 LINK idxd_perf 00:04:47.196 CXX test/cpp_headers/crc64.o 00:04:47.196 CC test/bdev/bdevio/bdevio.o 00:04:47.196 CC app/spdk_nvme_discover/discovery_aer.o 00:04:47.196 CC test/nvme/reset/reset.o 00:04:47.455 CXX test/cpp_headers/dif.o 00:04:47.455 LINK accel_perf 00:04:47.455 CC examples/nvme/hello_world/hello_world.o 00:04:47.455 CC examples/blob/hello_world/hello_blob.o 00:04:47.455 LINK spdk_nvme_discover 00:04:47.455 CXX test/cpp_headers/dma.o 00:04:47.714 LINK reset 00:04:47.714 CC examples/nvme/reconnect/reconnect.o 00:04:47.714 LINK bdevio 00:04:47.714 CXX test/cpp_headers/endian.o 00:04:47.714 LINK hello_world 00:04:47.714 LINK hello_blob 00:04:47.972 LINK spdk_nvme_perf 00:04:47.972 CC test/nvme/sgl/sgl.o 00:04:47.972 CC app/spdk_top/spdk_top.o 00:04:47.972 LINK spdk_nvme_identify 00:04:47.972 CXX test/cpp_headers/env_dpdk.o 00:04:47.972 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:48.231 CC examples/nvme/arbitration/arbitration.o 00:04:48.231 CXX test/cpp_headers/env.o 00:04:48.231 CC examples/blob/cli/blobcli.o 00:04:48.231 LINK reconnect 00:04:48.231 CC examples/nvme/hotplug/hotplug.o 00:04:48.231 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:48.231 LINK sgl 00:04:48.231 CXX test/cpp_headers/event.o 00:04:48.490 LINK cmb_copy 00:04:48.490 LINK arbitration 
00:04:48.490 CXX test/cpp_headers/fd_group.o 00:04:48.490 CC test/nvme/e2edp/nvme_dp.o 00:04:48.490 CC test/nvme/overhead/overhead.o 00:04:48.490 LINK hotplug 00:04:48.748 LINK nvme_manage 00:04:48.748 CC examples/nvme/abort/abort.o 00:04:48.748 CXX test/cpp_headers/fd.o 00:04:48.748 LINK blobcli 00:04:49.007 CC app/vhost/vhost.o 00:04:49.007 LINK nvme_dp 00:04:49.007 CC app/spdk_dd/spdk_dd.o 00:04:49.007 CXX test/cpp_headers/file.o 00:04:49.007 LINK overhead 00:04:49.007 CC app/fio/nvme/fio_plugin.o 00:04:49.007 LINK vhost 00:04:49.265 CC app/fio/bdev/fio_plugin.o 00:04:49.265 CXX test/cpp_headers/fsdev.o 00:04:49.265 LINK spdk_top 00:04:49.265 LINK abort 00:04:49.265 CC examples/bdev/hello_world/hello_bdev.o 00:04:49.265 CC test/nvme/err_injection/err_injection.o 00:04:49.265 CXX test/cpp_headers/fsdev_module.o 00:04:49.524 LINK spdk_dd 00:04:49.524 CC test/nvme/startup/startup.o 00:04:49.524 CC examples/bdev/bdevperf/bdevperf.o 00:04:49.524 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:49.524 LINK err_injection 00:04:49.524 CXX test/cpp_headers/ftl.o 00:04:49.524 LINK hello_bdev 00:04:49.782 LINK startup 00:04:49.782 LINK pmr_persistence 00:04:49.782 CC test/nvme/reserve/reserve.o 00:04:49.782 LINK spdk_bdev 00:04:49.782 CC test/nvme/simple_copy/simple_copy.o 00:04:49.782 LINK spdk_nvme 00:04:49.782 CXX test/cpp_headers/fuse_dispatcher.o 00:04:49.782 CXX test/cpp_headers/gpt_spec.o 00:04:50.041 CXX test/cpp_headers/hexlify.o 00:04:50.041 CC test/nvme/connect_stress/connect_stress.o 00:04:50.041 LINK reserve 00:04:50.041 CC test/nvme/boot_partition/boot_partition.o 00:04:50.041 CC test/nvme/compliance/nvme_compliance.o 00:04:50.041 CXX test/cpp_headers/histogram_data.o 00:04:50.041 LINK simple_copy 00:04:50.299 LINK connect_stress 00:04:50.299 CC test/nvme/fused_ordering/fused_ordering.o 00:04:50.299 LINK boot_partition 00:04:50.299 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:50.299 CXX test/cpp_headers/idxd.o 00:04:50.299 CC test/nvme/fdp/fdp.o 
00:04:50.299 CXX test/cpp_headers/idxd_spec.o 00:04:50.299 CXX test/cpp_headers/init.o 00:04:50.299 LINK fused_ordering 00:04:50.299 CC test/nvme/cuse/cuse.o 00:04:50.557 CXX test/cpp_headers/ioat.o 00:04:50.557 LINK nvme_compliance 00:04:50.557 LINK doorbell_aers 00:04:50.557 CXX test/cpp_headers/ioat_spec.o 00:04:50.557 CXX test/cpp_headers/iscsi_spec.o 00:04:50.557 CXX test/cpp_headers/json.o 00:04:50.557 CXX test/cpp_headers/jsonrpc.o 00:04:50.557 CXX test/cpp_headers/keyring.o 00:04:50.557 CXX test/cpp_headers/keyring_module.o 00:04:50.817 LINK bdevperf 00:04:50.817 LINK fdp 00:04:50.817 CXX test/cpp_headers/likely.o 00:04:50.817 CXX test/cpp_headers/log.o 00:04:50.817 CXX test/cpp_headers/lvol.o 00:04:50.817 CXX test/cpp_headers/md5.o 00:04:50.817 CXX test/cpp_headers/memory.o 00:04:50.817 CXX test/cpp_headers/mmio.o 00:04:50.817 CXX test/cpp_headers/nbd.o 00:04:50.817 CXX test/cpp_headers/net.o 00:04:50.817 CXX test/cpp_headers/notify.o 00:04:50.817 CXX test/cpp_headers/nvme.o 00:04:51.075 CXX test/cpp_headers/nvme_intel.o 00:04:51.075 CXX test/cpp_headers/nvme_ocssd.o 00:04:51.075 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:51.075 CXX test/cpp_headers/nvme_spec.o 00:04:51.075 CXX test/cpp_headers/nvme_zns.o 00:04:51.075 CXX test/cpp_headers/nvmf_cmd.o 00:04:51.075 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:51.075 CXX test/cpp_headers/nvmf.o 00:04:51.075 CC examples/nvmf/nvmf/nvmf.o 00:04:51.334 CXX test/cpp_headers/nvmf_spec.o 00:04:51.334 CXX test/cpp_headers/nvmf_transport.o 00:04:51.334 CXX test/cpp_headers/opal.o 00:04:51.334 CXX test/cpp_headers/opal_spec.o 00:04:51.334 CXX test/cpp_headers/pci_ids.o 00:04:51.334 CXX test/cpp_headers/pipe.o 00:04:51.334 CXX test/cpp_headers/queue.o 00:04:51.334 CXX test/cpp_headers/reduce.o 00:04:51.593 CXX test/cpp_headers/rpc.o 00:04:51.593 CXX test/cpp_headers/scheduler.o 00:04:51.593 CXX test/cpp_headers/scsi.o 00:04:51.593 CXX test/cpp_headers/scsi_spec.o 00:04:51.593 CXX test/cpp_headers/sock.o 00:04:51.593 LINK 
nvmf 00:04:51.593 CXX test/cpp_headers/stdinc.o 00:04:51.593 CXX test/cpp_headers/string.o 00:04:51.593 CXX test/cpp_headers/thread.o 00:04:51.593 CXX test/cpp_headers/trace.o 00:04:51.852 CXX test/cpp_headers/trace_parser.o 00:04:51.852 CXX test/cpp_headers/tree.o 00:04:51.852 CXX test/cpp_headers/ublk.o 00:04:51.852 CXX test/cpp_headers/util.o 00:04:51.852 CXX test/cpp_headers/uuid.o 00:04:51.852 CXX test/cpp_headers/version.o 00:04:51.852 CXX test/cpp_headers/vfio_user_pci.o 00:04:51.852 CXX test/cpp_headers/vfio_user_spec.o 00:04:51.852 CXX test/cpp_headers/vhost.o 00:04:51.852 CXX test/cpp_headers/vmd.o 00:04:51.852 CXX test/cpp_headers/xor.o 00:04:51.852 CXX test/cpp_headers/zipf.o 00:04:52.174 LINK cuse 00:04:54.097 LINK esnap 00:04:54.355 00:04:54.356 real 1m47.017s 00:04:54.356 user 10m2.529s 00:04:54.356 sys 1m56.083s 00:04:54.356 14:15:33 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:54.356 14:15:33 make -- common/autotest_common.sh@10 -- $ set +x 00:04:54.356 ************************************ 00:04:54.356 END TEST make 00:04:54.356 ************************************ 00:04:54.356 14:15:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:54.356 14:15:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:54.356 14:15:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:54.356 14:15:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:54.356 14:15:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:54.356 14:15:33 -- pm/common@44 -- $ pid=5242 00:04:54.356 14:15:33 -- pm/common@50 -- $ kill -TERM 5242 00:04:54.356 14:15:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:54.356 14:15:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:54.356 14:15:33 -- pm/common@44 -- $ pid=5244 00:04:54.356 14:15:33 -- pm/common@50 -- $ kill -TERM 5244 00:04:54.356 14:15:33 -- 
spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:54.356 14:15:33 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:54.356 14:15:33 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.356 14:15:33 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.356 14:15:33 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.615 14:15:33 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.615 14:15:33 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.615 14:15:33 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.616 14:15:33 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.616 14:15:33 -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.616 14:15:33 -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.616 14:15:33 -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.616 14:15:33 -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.616 14:15:33 -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.616 14:15:33 -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.616 14:15:33 -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.616 14:15:33 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.616 14:15:33 -- scripts/common.sh@344 -- # case "$op" in 00:04:54.616 14:15:33 -- scripts/common.sh@345 -- # : 1 00:04:54.616 14:15:33 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.616 14:15:33 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.616 14:15:33 -- scripts/common.sh@365 -- # decimal 1 00:04:54.616 14:15:33 -- scripts/common.sh@353 -- # local d=1 00:04:54.616 14:15:33 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.616 14:15:33 -- scripts/common.sh@355 -- # echo 1 00:04:54.616 14:15:33 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.616 14:15:33 -- scripts/common.sh@366 -- # decimal 2 00:04:54.616 14:15:33 -- scripts/common.sh@353 -- # local d=2 00:04:54.616 14:15:33 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.616 14:15:33 -- scripts/common.sh@355 -- # echo 2 00:04:54.616 14:15:33 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.616 14:15:33 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.616 14:15:33 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.616 14:15:33 -- scripts/common.sh@368 -- # return 0 00:04:54.616 14:15:33 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.616 14:15:33 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.616 --rc genhtml_branch_coverage=1 00:04:54.616 --rc genhtml_function_coverage=1 00:04:54.616 --rc genhtml_legend=1 00:04:54.616 --rc geninfo_all_blocks=1 00:04:54.616 --rc geninfo_unexecuted_blocks=1 00:04:54.616 00:04:54.616 ' 00:04:54.616 14:15:33 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.616 --rc genhtml_branch_coverage=1 00:04:54.616 --rc genhtml_function_coverage=1 00:04:54.616 --rc genhtml_legend=1 00:04:54.616 --rc geninfo_all_blocks=1 00:04:54.616 --rc geninfo_unexecuted_blocks=1 00:04:54.616 00:04:54.616 ' 00:04:54.616 14:15:33 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.616 --rc genhtml_branch_coverage=1 00:04:54.616 --rc 
genhtml_function_coverage=1 00:04:54.616 --rc genhtml_legend=1 00:04:54.616 --rc geninfo_all_blocks=1 00:04:54.616 --rc geninfo_unexecuted_blocks=1 00:04:54.616 00:04:54.616 ' 00:04:54.616 14:15:33 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.616 --rc genhtml_branch_coverage=1 00:04:54.616 --rc genhtml_function_coverage=1 00:04:54.616 --rc genhtml_legend=1 00:04:54.616 --rc geninfo_all_blocks=1 00:04:54.616 --rc geninfo_unexecuted_blocks=1 00:04:54.616 00:04:54.616 ' 00:04:54.616 14:15:33 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:54.616 14:15:33 -- nvmf/common.sh@7 -- # uname -s 00:04:54.616 14:15:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:54.616 14:15:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:54.616 14:15:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:54.616 14:15:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:54.616 14:15:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:54.616 14:15:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:54.616 14:15:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:54.616 14:15:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:54.616 14:15:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:54.616 14:15:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:54.616 14:15:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:db59ceda-2696-4653-8c92-acb430fd34b6 00:04:54.616 14:15:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=db59ceda-2696-4653-8c92-acb430fd34b6 00:04:54.616 14:15:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:54.616 14:15:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:54.616 14:15:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:54.616 14:15:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:54.616 14:15:33 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:54.616 14:15:33 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:54.616 14:15:33 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:54.616 14:15:33 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:54.616 14:15:33 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:54.616 14:15:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.616 14:15:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.616 14:15:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.616 14:15:33 -- paths/export.sh@5 -- # export PATH 00:04:54.616 14:15:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.616 14:15:33 -- nvmf/common.sh@51 -- # : 0 00:04:54.616 14:15:33 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:54.616 14:15:33 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:54.616 14:15:33 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:54.616 14:15:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:54.616 14:15:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:54.616 14:15:33 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:54.617 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:54.617 14:15:33 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:54.617 14:15:33 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:54.617 14:15:33 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:54.617 14:15:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:54.617 14:15:33 -- spdk/autotest.sh@32 -- # uname -s 00:04:54.617 14:15:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:54.617 14:15:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:54.617 14:15:33 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:54.617 14:15:33 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:54.617 14:15:33 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:54.617 14:15:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:54.617 14:15:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:54.617 14:15:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:54.617 14:15:33 -- spdk/autotest.sh@48 -- # udevadm_pid=54422 00:04:54.617 14:15:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:54.617 14:15:33 -- pm/common@17 -- # local monitor 00:04:54.617 14:15:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:54.617 14:15:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:54.617 14:15:33 -- pm/common@25 -- # sleep 1 00:04:54.617 14:15:33 -- pm/common@21 -- # date +%s 00:04:54.617 14:15:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:54.617 14:15:33 -- 
pm/common@21 -- # date +%s 00:04:54.617 14:15:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732112133 00:04:54.617 14:15:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732112133 00:04:54.617 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732112133_collect-cpu-load.pm.log 00:04:54.617 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732112133_collect-vmstat.pm.log 00:04:55.994 14:15:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:55.994 14:15:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:55.994 14:15:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:55.994 14:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.994 14:15:34 -- spdk/autotest.sh@59 -- # create_test_list 00:04:55.994 14:15:34 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:55.994 14:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.994 14:15:34 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:55.994 14:15:34 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:55.994 14:15:34 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:55.994 14:15:34 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:55.994 14:15:34 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:55.994 14:15:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:55.994 14:15:34 -- common/autotest_common.sh@1457 -- # uname 00:04:55.994 14:15:34 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:55.994 14:15:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:55.994 14:15:34 -- common/autotest_common.sh@1477 -- 
# uname 00:04:55.994 14:15:34 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:55.994 14:15:34 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:55.994 14:15:34 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:55.994 lcov: LCOV version 1.15 00:04:55.995 14:15:34 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:14.115 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:14.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:32.272 14:16:09 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:32.272 14:16:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:32.272 14:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:32.272 14:16:09 -- spdk/autotest.sh@78 -- # rm -f 00:05:32.272 14:16:09 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:32.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.272 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:32.272 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:32.272 14:16:10 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:32.272 14:16:10 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:32.272 14:16:10 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:32.272 14:16:10 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:32.272 
14:16:10 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:32.272 14:16:10 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:32.272 14:16:10 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:32.272 14:16:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:32.272 14:16:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:32.272 14:16:10 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:32.272 14:16:10 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:32.272 14:16:10 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:32.272 14:16:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:32.272 14:16:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:32.272 14:16:10 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:32.272 14:16:10 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:32.272 14:16:10 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:32.272 14:16:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:32.273 14:16:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:32.273 14:16:10 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:32.273 14:16:10 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:32.273 14:16:10 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:32.273 14:16:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:32.273 14:16:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:32.273 14:16:10 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:32.273 14:16:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.273 14:16:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.273 14:16:10 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:05:32.273 14:16:10 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:32.273 14:16:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:32.273 No valid GPT data, bailing 00:05:32.273 14:16:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:32.273 14:16:10 -- scripts/common.sh@394 -- # pt= 00:05:32.273 14:16:10 -- scripts/common.sh@395 -- # return 1 00:05:32.273 14:16:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:32.273 1+0 records in 00:05:32.273 1+0 records out 00:05:32.273 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00480785 s, 218 MB/s 00:05:32.273 14:16:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.273 14:16:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.273 14:16:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:32.273 14:16:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:32.273 14:16:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:32.273 No valid GPT data, bailing 00:05:32.273 14:16:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:32.273 14:16:10 -- scripts/common.sh@394 -- # pt= 00:05:32.273 14:16:10 -- scripts/common.sh@395 -- # return 1 00:05:32.273 14:16:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:32.273 1+0 records in 00:05:32.273 1+0 records out 00:05:32.273 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00336856 s, 311 MB/s 00:05:32.273 14:16:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.273 14:16:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.273 14:16:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:32.273 14:16:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:32.273 14:16:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:05:32.273 No valid GPT data, bailing 00:05:32.273 14:16:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:32.273 14:16:10 -- scripts/common.sh@394 -- # pt= 00:05:32.273 14:16:10 -- scripts/common.sh@395 -- # return 1 00:05:32.273 14:16:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:32.273 1+0 records in 00:05:32.273 1+0 records out 00:05:32.273 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00430301 s, 244 MB/s 00:05:32.273 14:16:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.273 14:16:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.273 14:16:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:32.273 14:16:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:32.273 14:16:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:32.273 No valid GPT data, bailing 00:05:32.273 14:16:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:32.273 14:16:10 -- scripts/common.sh@394 -- # pt= 00:05:32.273 14:16:10 -- scripts/common.sh@395 -- # return 1 00:05:32.273 14:16:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:32.273 1+0 records in 00:05:32.273 1+0 records out 00:05:32.273 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508324 s, 206 MB/s 00:05:32.273 14:16:10 -- spdk/autotest.sh@105 -- # sync 00:05:32.273 14:16:10 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:32.273 14:16:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:32.273 14:16:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:34.176 14:16:12 -- spdk/autotest.sh@111 -- # uname -s 00:05:34.176 14:16:12 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:34.176 14:16:12 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:34.176 14:16:12 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:05:34.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.744 Hugepages 00:05:34.744 node hugesize free / total 00:05:34.744 node0 1048576kB 0 / 0 00:05:34.744 node0 2048kB 0 / 0 00:05:34.744 00:05:34.744 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:34.744 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:34.744 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:35.004 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:35.004 14:16:13 -- spdk/autotest.sh@117 -- # uname -s 00:05:35.004 14:16:13 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:35.004 14:16:13 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:35.004 14:16:13 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:35.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.832 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.832 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.832 14:16:14 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:36.769 14:16:15 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:36.769 14:16:15 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:36.769 14:16:15 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:36.769 14:16:15 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:36.769 14:16:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:36.769 14:16:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:36.769 14:16:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.769 14:16:15 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:36.769 14:16:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:36.769 14:16:15 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:36.769 14:16:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:36.769 14:16:15 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:37.335 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.335 Waiting for block devices as requested 00:05:37.335 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.335 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.595 14:16:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:37.595 14:16:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:37.595 14:16:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:37.595 14:16:16 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:37.595 14:16:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:37.595 14:16:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:37.595 14:16:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:37.595 14:16:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:37.595 14:16:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:37.595 14:16:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:37.595 14:16:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:37.595 14:16:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:37.595 14:16:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:37.595 14:16:16 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:37.595 14:16:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:37.595 14:16:16 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:05:37.595 14:16:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:37.595 14:16:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:37.595 14:16:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:37.595 14:16:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:37.595 14:16:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:37.595 14:16:16 -- common/autotest_common.sh@1543 -- # continue 00:05:37.595 14:16:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:37.595 14:16:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:37.595 14:16:16 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:37.595 14:16:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:37.595 14:16:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:37.595 14:16:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:37.595 14:16:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:37.595 14:16:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:37.595 14:16:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:37.595 14:16:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:37.595 14:16:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:37.595 14:16:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:37.595 14:16:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:37.595 14:16:16 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:37.595 14:16:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:37.595 14:16:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:37.595 14:16:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:05:37.595 14:16:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:37.595 14:16:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:37.595 14:16:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:37.595 14:16:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:37.595 14:16:16 -- common/autotest_common.sh@1543 -- # continue 00:05:37.595 14:16:16 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:37.595 14:16:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:37.595 14:16:16 -- common/autotest_common.sh@10 -- # set +x 00:05:37.595 14:16:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:37.595 14:16:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:37.595 14:16:16 -- common/autotest_common.sh@10 -- # set +x 00:05:37.595 14:16:16 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:38.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.163 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.421 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.421 14:16:17 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:38.421 14:16:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.421 14:16:17 -- common/autotest_common.sh@10 -- # set +x 00:05:38.421 14:16:17 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:38.421 14:16:17 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:38.421 14:16:17 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:38.421 14:16:17 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:38.421 14:16:17 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:38.421 14:16:17 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:38.421 14:16:17 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:38.421 14:16:17 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:38.421 
14:16:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:38.421 14:16:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:38.421 14:16:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:38.421 14:16:17 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:38.421 14:16:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:38.422 14:16:17 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:38.422 14:16:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:38.422 14:16:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:38.422 14:16:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:38.422 14:16:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:38.422 14:16:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:38.422 14:16:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:38.422 14:16:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:38.422 14:16:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:38.422 14:16:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:38.422 14:16:17 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:38.422 14:16:17 -- common/autotest_common.sh@1572 -- # return 0 00:05:38.422 14:16:17 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:38.422 14:16:17 -- common/autotest_common.sh@1580 -- # return 0 00:05:38.422 14:16:17 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:38.422 14:16:17 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:38.422 14:16:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:38.422 14:16:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:38.422 14:16:17 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:38.422 14:16:17 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.422 14:16:17 -- common/autotest_common.sh@10 -- # set +x 00:05:38.680 14:16:17 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:38.680 14:16:17 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:38.680 14:16:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.680 14:16:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.680 14:16:17 -- common/autotest_common.sh@10 -- # set +x 00:05:38.680 ************************************ 00:05:38.680 START TEST env 00:05:38.680 ************************************ 00:05:38.680 14:16:17 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:38.680 * Looking for test storage... 00:05:38.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:38.680 14:16:17 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.680 14:16:17 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.680 14:16:17 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.680 14:16:17 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.680 14:16:17 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.680 14:16:17 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.680 14:16:17 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.680 14:16:17 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.680 14:16:17 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.680 14:16:17 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.680 14:16:17 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.680 14:16:17 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.680 14:16:17 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.680 14:16:17 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.680 14:16:17 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.680 14:16:17 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:38.680 14:16:17 env -- scripts/common.sh@345 -- # : 1 00:05:38.680 14:16:17 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.680 14:16:17 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.680 14:16:17 env -- scripts/common.sh@365 -- # decimal 1 00:05:38.680 14:16:17 env -- scripts/common.sh@353 -- # local d=1 00:05:38.680 14:16:17 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.680 14:16:17 env -- scripts/common.sh@355 -- # echo 1 00:05:38.680 14:16:17 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.680 14:16:17 env -- scripts/common.sh@366 -- # decimal 2 00:05:38.680 14:16:17 env -- scripts/common.sh@353 -- # local d=2 00:05:38.681 14:16:17 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.681 14:16:17 env -- scripts/common.sh@355 -- # echo 2 00:05:38.681 14:16:17 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.681 14:16:17 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.681 14:16:17 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.681 14:16:17 env -- scripts/common.sh@368 -- # return 0 00:05:38.681 14:16:17 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.681 14:16:17 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.681 --rc genhtml_branch_coverage=1 00:05:38.681 --rc genhtml_function_coverage=1 00:05:38.681 --rc genhtml_legend=1 00:05:38.681 --rc geninfo_all_blocks=1 00:05:38.681 --rc geninfo_unexecuted_blocks=1 00:05:38.681 00:05:38.681 ' 00:05:38.681 14:16:17 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.681 --rc genhtml_branch_coverage=1 00:05:38.681 --rc genhtml_function_coverage=1 00:05:38.681 --rc genhtml_legend=1 00:05:38.681 --rc 
geninfo_all_blocks=1 00:05:38.681 --rc geninfo_unexecuted_blocks=1 00:05:38.681 00:05:38.681 ' 00:05:38.681 14:16:17 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.681 --rc genhtml_branch_coverage=1 00:05:38.681 --rc genhtml_function_coverage=1 00:05:38.681 --rc genhtml_legend=1 00:05:38.681 --rc geninfo_all_blocks=1 00:05:38.681 --rc geninfo_unexecuted_blocks=1 00:05:38.681 00:05:38.681 ' 00:05:38.681 14:16:17 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.681 --rc genhtml_branch_coverage=1 00:05:38.681 --rc genhtml_function_coverage=1 00:05:38.681 --rc genhtml_legend=1 00:05:38.681 --rc geninfo_all_blocks=1 00:05:38.681 --rc geninfo_unexecuted_blocks=1 00:05:38.681 00:05:38.681 ' 00:05:38.681 14:16:17 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:38.681 14:16:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.681 14:16:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.681 14:16:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.681 ************************************ 00:05:38.681 START TEST env_memory 00:05:38.681 ************************************ 00:05:38.681 14:16:17 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:38.940 00:05:38.940 00:05:38.940 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.940 http://cunit.sourceforge.net/ 00:05:38.940 00:05:38.940 00:05:38.940 Suite: memory 00:05:38.940 Test: alloc and free memory map ...[2024-11-20 14:16:17.707481] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:38.940 passed 00:05:38.940 Test: mem map translation ...[2024-11-20 14:16:17.757308] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:38.940 [2024-11-20 14:16:17.757406] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:38.940 [2024-11-20 14:16:17.757526] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:38.940 [2024-11-20 14:16:17.757572] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:38.940 passed 00:05:38.940 Test: mem map registration ...[2024-11-20 14:16:17.830581] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:38.940 [2024-11-20 14:16:17.830678] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:38.940 passed 00:05:39.199 Test: mem map adjacent registrations ...passed 00:05:39.199 00:05:39.199 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.199 suites 1 1 n/a 0 0 00:05:39.199 tests 4 4 4 0 0 00:05:39.199 asserts 152 152 152 0 n/a 00:05:39.199 00:05:39.199 Elapsed time = 0.269 seconds 00:05:39.199 00:05:39.199 real 0m0.309s 00:05:39.199 user 0m0.285s 00:05:39.199 sys 0m0.018s 00:05:39.199 14:16:17 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.199 14:16:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:39.199 ************************************ 00:05:39.199 END TEST env_memory 00:05:39.199 ************************************ 00:05:39.199 14:16:17 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:39.199 
14:16:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.199 14:16:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.199 14:16:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.199 ************************************ 00:05:39.199 START TEST env_vtophys 00:05:39.199 ************************************ 00:05:39.199 14:16:17 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:39.199 EAL: lib.eal log level changed from notice to debug 00:05:39.199 EAL: Detected lcore 0 as core 0 on socket 0 00:05:39.199 EAL: Detected lcore 1 as core 0 on socket 0 00:05:39.200 EAL: Detected lcore 2 as core 0 on socket 0 00:05:39.200 EAL: Detected lcore 3 as core 0 on socket 0 00:05:39.200 EAL: Detected lcore 4 as core 0 on socket 0 00:05:39.200 EAL: Detected lcore 5 as core 0 on socket 0 00:05:39.200 EAL: Detected lcore 6 as core 0 on socket 0 00:05:39.200 EAL: Detected lcore 7 as core 0 on socket 0 00:05:39.200 EAL: Detected lcore 8 as core 0 on socket 0 00:05:39.200 EAL: Detected lcore 9 as core 0 on socket 0 00:05:39.200 EAL: Maximum logical cores by configuration: 128 00:05:39.200 EAL: Detected CPU lcores: 10 00:05:39.200 EAL: Detected NUMA nodes: 1 00:05:39.200 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:39.200 EAL: Detected shared linkage of DPDK 00:05:39.200 EAL: No shared files mode enabled, IPC will be disabled 00:05:39.200 EAL: Selected IOVA mode 'PA' 00:05:39.200 EAL: Probing VFIO support... 00:05:39.200 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:39.200 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:39.200 EAL: Ask a virtual area of 0x2e000 bytes 00:05:39.200 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:39.200 EAL: Setting up physically contiguous memory... 
00:05:39.200 EAL: Setting maximum number of open files to 524288 00:05:39.200 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:39.200 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:39.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.200 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:39.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.200 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:39.200 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:39.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.200 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:39.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.200 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:39.200 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:39.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.200 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:39.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.200 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:39.200 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:39.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.200 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:39.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.200 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:39.200 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:39.200 EAL: Hugepages will be freed exactly as allocated. 
00:05:39.200 EAL: No shared files mode enabled, IPC is disabled 00:05:39.200 EAL: No shared files mode enabled, IPC is disabled 00:05:39.459 EAL: TSC frequency is ~2200000 KHz 00:05:39.459 EAL: Main lcore 0 is ready (tid=7fbc1689aa40;cpuset=[0]) 00:05:39.459 EAL: Trying to obtain current memory policy. 00:05:39.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.459 EAL: Restoring previous memory policy: 0 00:05:39.459 EAL: request: mp_malloc_sync 00:05:39.459 EAL: No shared files mode enabled, IPC is disabled 00:05:39.459 EAL: Heap on socket 0 was expanded by 2MB 00:05:39.459 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:39.459 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:39.459 EAL: Mem event callback 'spdk:(nil)' registered 00:05:39.459 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:39.459 00:05:39.459 00:05:39.459 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.459 http://cunit.sourceforge.net/ 00:05:39.459 00:05:39.459 00:05:39.459 Suite: components_suite 00:05:40.025 Test: vtophys_malloc_test ...passed 00:05:40.025 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:40.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.026 EAL: Restoring previous memory policy: 4 00:05:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.026 EAL: request: mp_malloc_sync 00:05:40.026 EAL: No shared files mode enabled, IPC is disabled 00:05:40.026 EAL: Heap on socket 0 was expanded by 4MB 00:05:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.026 EAL: request: mp_malloc_sync 00:05:40.026 EAL: No shared files mode enabled, IPC is disabled 00:05:40.026 EAL: Heap on socket 0 was shrunk by 4MB 00:05:40.026 EAL: Trying to obtain current memory policy. 
00:05:40.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.026 EAL: Restoring previous memory policy: 4 00:05:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.026 EAL: request: mp_malloc_sync 00:05:40.026 EAL: No shared files mode enabled, IPC is disabled 00:05:40.026 EAL: Heap on socket 0 was expanded by 6MB 00:05:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.026 EAL: request: mp_malloc_sync 00:05:40.026 EAL: No shared files mode enabled, IPC is disabled 00:05:40.026 EAL: Heap on socket 0 was shrunk by 6MB 00:05:40.026 EAL: Trying to obtain current memory policy. 00:05:40.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.026 EAL: Restoring previous memory policy: 4 00:05:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.026 EAL: request: mp_malloc_sync 00:05:40.026 EAL: No shared files mode enabled, IPC is disabled 00:05:40.026 EAL: Heap on socket 0 was expanded by 10MB 00:05:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.026 EAL: request: mp_malloc_sync 00:05:40.026 EAL: No shared files mode enabled, IPC is disabled 00:05:40.026 EAL: Heap on socket 0 was shrunk by 10MB 00:05:40.026 EAL: Trying to obtain current memory policy. 00:05:40.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.026 EAL: Restoring previous memory policy: 4 00:05:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.026 EAL: request: mp_malloc_sync 00:05:40.026 EAL: No shared files mode enabled, IPC is disabled 00:05:40.026 EAL: Heap on socket 0 was expanded by 18MB 00:05:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.026 EAL: request: mp_malloc_sync 00:05:40.026 EAL: No shared files mode enabled, IPC is disabled 00:05:40.026 EAL: Heap on socket 0 was shrunk by 18MB 00:05:40.026 EAL: Trying to obtain current memory policy. 
00:05:40.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.026 EAL: Restoring previous memory policy: 4 00:05:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.026 EAL: request: mp_malloc_sync 00:05:40.026 EAL: No shared files mode enabled, IPC is disabled 00:05:40.026 EAL: Heap on socket 0 was expanded by 34MB 00:05:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.026 EAL: request: mp_malloc_sync 00:05:40.026 EAL: No shared files mode enabled, IPC is disabled 00:05:40.026 EAL: Heap on socket 0 was shrunk by 34MB 00:05:40.026 EAL: Trying to obtain current memory policy. 00:05:40.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.026 EAL: Restoring previous memory policy: 4 00:05:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.026 EAL: request: mp_malloc_sync 00:05:40.026 EAL: No shared files mode enabled, IPC is disabled 00:05:40.026 EAL: Heap on socket 0 was expanded by 66MB 00:05:40.285 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.285 EAL: request: mp_malloc_sync 00:05:40.285 EAL: No shared files mode enabled, IPC is disabled 00:05:40.285 EAL: Heap on socket 0 was shrunk by 66MB 00:05:40.285 EAL: Trying to obtain current memory policy. 00:05:40.285 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.285 EAL: Restoring previous memory policy: 4 00:05:40.285 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.285 EAL: request: mp_malloc_sync 00:05:40.285 EAL: No shared files mode enabled, IPC is disabled 00:05:40.285 EAL: Heap on socket 0 was expanded by 130MB 00:05:40.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.543 EAL: request: mp_malloc_sync 00:05:40.543 EAL: No shared files mode enabled, IPC is disabled 00:05:40.543 EAL: Heap on socket 0 was shrunk by 130MB 00:05:40.802 EAL: Trying to obtain current memory policy. 
00:05:40.802 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.802 EAL: Restoring previous memory policy: 4 00:05:40.802 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.802 EAL: request: mp_malloc_sync 00:05:40.802 EAL: No shared files mode enabled, IPC is disabled 00:05:40.802 EAL: Heap on socket 0 was expanded by 258MB 00:05:41.369 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.369 EAL: request: mp_malloc_sync 00:05:41.369 EAL: No shared files mode enabled, IPC is disabled 00:05:41.369 EAL: Heap on socket 0 was shrunk by 258MB 00:05:41.628 EAL: Trying to obtain current memory policy. 00:05:41.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.628 EAL: Restoring previous memory policy: 4 00:05:41.628 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.628 EAL: request: mp_malloc_sync 00:05:41.628 EAL: No shared files mode enabled, IPC is disabled 00:05:41.628 EAL: Heap on socket 0 was expanded by 514MB 00:05:42.565 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.824 EAL: request: mp_malloc_sync 00:05:42.824 EAL: No shared files mode enabled, IPC is disabled 00:05:42.824 EAL: Heap on socket 0 was shrunk by 514MB 00:05:43.393 EAL: Trying to obtain current memory policy. 
00:05:43.393 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.652 EAL: Restoring previous memory policy: 4 00:05:43.652 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.652 EAL: request: mp_malloc_sync 00:05:43.652 EAL: No shared files mode enabled, IPC is disabled 00:05:43.652 EAL: Heap on socket 0 was expanded by 1026MB 00:05:45.556 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.556 EAL: request: mp_malloc_sync 00:05:45.556 EAL: No shared files mode enabled, IPC is disabled 00:05:45.556 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:47.461 passed 00:05:47.461 00:05:47.461 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.461 suites 1 1 n/a 0 0 00:05:47.461 tests 2 2 2 0 0 00:05:47.461 asserts 5684 5684 5684 0 n/a 00:05:47.461 00:05:47.461 Elapsed time = 7.683 seconds 00:05:47.461 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.461 EAL: request: mp_malloc_sync 00:05:47.461 EAL: No shared files mode enabled, IPC is disabled 00:05:47.461 EAL: Heap on socket 0 was shrunk by 2MB 00:05:47.461 EAL: No shared files mode enabled, IPC is disabled 00:05:47.461 EAL: No shared files mode enabled, IPC is disabled 00:05:47.461 EAL: No shared files mode enabled, IPC is disabled 00:05:47.461 00:05:47.461 real 0m8.018s 00:05:47.461 user 0m6.751s 00:05:47.461 sys 0m1.094s 00:05:47.461 14:16:26 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.461 14:16:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:47.461 ************************************ 00:05:47.461 END TEST env_vtophys 00:05:47.461 ************************************ 00:05:47.461 14:16:26 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:47.461 14:16:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.461 14:16:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.461 14:16:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.461 
************************************ 00:05:47.461 START TEST env_pci 00:05:47.461 ************************************ 00:05:47.461 14:16:26 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:47.461 00:05:47.461 00:05:47.461 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.461 http://cunit.sourceforge.net/ 00:05:47.461 00:05:47.461 00:05:47.461 Suite: pci 00:05:47.461 Test: pci_hook ...[2024-11-20 14:16:26.106604] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56752 has claimed it 00:05:47.461 passed 00:05:47.461 00:05:47.461 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.461 suites 1 1 n/a 0 0 00:05:47.461 tests 1 1 1 0 0 00:05:47.461 asserts 25 25 25 0 n/a 00:05:47.461 00:05:47.461 Elapsed time = 0.008 seconds EAL: Cannot find device (10000:00:01.0) 00:05:47.461 EAL: Failed to attach device on primary process 00:05:47.461 00:05:47.461 00:05:47.461 real 0m0.085s 00:05:47.461 user 0m0.036s 00:05:47.461 sys 0m0.048s 00:05:47.461 14:16:26 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.461 14:16:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:47.461 ************************************ 00:05:47.461 END TEST env_pci 00:05:47.461 ************************************ 00:05:47.461 14:16:26 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:47.461 14:16:26 env -- env/env.sh@15 -- # uname 00:05:47.461 14:16:26 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:47.461 14:16:26 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:47.461 14:16:26 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:47.461 14:16:26 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:47.461 14:16:26 env
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.461 14:16:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.461 ************************************ 00:05:47.461 START TEST env_dpdk_post_init 00:05:47.461 ************************************ 00:05:47.461 14:16:26 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:47.461 EAL: Detected CPU lcores: 10 00:05:47.461 EAL: Detected NUMA nodes: 1 00:05:47.461 EAL: Detected shared linkage of DPDK 00:05:47.461 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:47.461 EAL: Selected IOVA mode 'PA' 00:05:47.461 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:47.720 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:47.720 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:47.720 Starting DPDK initialization... 00:05:47.720 Starting SPDK post initialization... 00:05:47.720 SPDK NVMe probe 00:05:47.720 Attaching to 0000:00:10.0 00:05:47.720 Attaching to 0000:00:11.0 00:05:47.720 Attached to 0000:00:10.0 00:05:47.720 Attached to 0000:00:11.0 00:05:47.720 Cleaning up... 
00:05:47.720 00:05:47.720 real 0m0.309s 00:05:47.720 user 0m0.104s 00:05:47.720 sys 0m0.104s 00:05:47.720 14:16:26 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.720 14:16:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:47.720 ************************************ 00:05:47.720 END TEST env_dpdk_post_init 00:05:47.720 ************************************ 00:05:47.720 14:16:26 env -- env/env.sh@26 -- # uname 00:05:47.720 14:16:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:47.720 14:16:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:47.720 14:16:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.720 14:16:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.720 14:16:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.720 ************************************ 00:05:47.720 START TEST env_mem_callbacks 00:05:47.720 ************************************ 00:05:47.720 14:16:26 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:47.720 EAL: Detected CPU lcores: 10 00:05:47.720 EAL: Detected NUMA nodes: 1 00:05:47.720 EAL: Detected shared linkage of DPDK 00:05:47.720 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:47.720 EAL: Selected IOVA mode 'PA' 00:05:47.979 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:47.979 00:05:47.979 00:05:47.979 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.979 http://cunit.sourceforge.net/ 00:05:47.979 00:05:47.979 00:05:47.979 Suite: memory 00:05:47.979 Test: test ... 
00:05:47.979 register 0x200000200000 2097152 00:05:47.979 malloc 3145728 00:05:47.979 register 0x200000400000 4194304 00:05:47.979 buf 0x2000004fffc0 len 3145728 PASSED 00:05:47.979 malloc 64 00:05:47.979 buf 0x2000004ffec0 len 64 PASSED 00:05:47.979 malloc 4194304 00:05:47.979 register 0x200000800000 6291456 00:05:47.979 buf 0x2000009fffc0 len 4194304 PASSED 00:05:47.979 free 0x2000004fffc0 3145728 00:05:47.979 free 0x2000004ffec0 64 00:05:47.979 unregister 0x200000400000 4194304 PASSED 00:05:47.979 free 0x2000009fffc0 4194304 00:05:47.979 unregister 0x200000800000 6291456 PASSED 00:05:47.979 malloc 8388608 00:05:47.979 register 0x200000400000 10485760 00:05:47.979 buf 0x2000005fffc0 len 8388608 PASSED 00:05:47.979 free 0x2000005fffc0 8388608 00:05:47.979 unregister 0x200000400000 10485760 PASSED 00:05:47.979 passed 00:05:47.979 00:05:47.979 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.979 suites 1 1 n/a 0 0 00:05:47.979 tests 1 1 1 0 0 00:05:47.979 asserts 15 15 15 0 n/a 00:05:47.979 00:05:47.979 Elapsed time = 0.096 seconds 00:05:47.979 00:05:47.979 real 0m0.320s 00:05:47.979 user 0m0.137s 00:05:47.979 sys 0m0.080s 00:05:47.979 14:16:26 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.979 14:16:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:47.979 ************************************ 00:05:47.979 END TEST env_mem_callbacks 00:05:47.979 ************************************ 00:05:47.979 00:05:47.979 real 0m9.530s 00:05:47.979 user 0m7.549s 00:05:47.979 sys 0m1.591s 00:05:47.979 14:16:26 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.979 14:16:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.979 ************************************ 00:05:47.979 END TEST env 00:05:47.979 ************************************ 00:05:48.238 14:16:26 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:48.238 14:16:26 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.238 14:16:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.238 14:16:26 -- common/autotest_common.sh@10 -- # set +x 00:05:48.238 ************************************ 00:05:48.238 START TEST rpc 00:05:48.238 ************************************ 00:05:48.238 14:16:26 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:48.238 * Looking for test storage... 00:05:48.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:48.238 14:16:27 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:48.238 14:16:27 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:48.238 14:16:27 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.238 14:16:27 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.238 14:16:27 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.238 14:16:27 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.238 14:16:27 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.238 14:16:27 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.238 14:16:27 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.238 14:16:27 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.238 14:16:27 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.238 14:16:27 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.238 14:16:27 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.238 14:16:27 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.238 14:16:27 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.238 14:16:27 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:48.238 14:16:27 rpc -- scripts/common.sh@345 -- # : 1 00:05:48.238 14:16:27 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.238 14:16:27 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.238 14:16:27 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:48.238 14:16:27 rpc -- scripts/common.sh@353 -- # local d=1 00:05:48.238 14:16:27 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.238 14:16:27 rpc -- scripts/common.sh@355 -- # echo 1 00:05:48.238 14:16:27 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.238 14:16:27 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:48.238 14:16:27 rpc -- scripts/common.sh@353 -- # local d=2 00:05:48.238 14:16:27 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.238 14:16:27 rpc -- scripts/common.sh@355 -- # echo 2 00:05:48.238 14:16:27 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.238 14:16:27 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.238 14:16:27 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.238 14:16:27 rpc -- scripts/common.sh@368 -- # return 0 00:05:48.238 14:16:27 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.238 14:16:27 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.238 --rc genhtml_branch_coverage=1 00:05:48.238 --rc genhtml_function_coverage=1 00:05:48.238 --rc genhtml_legend=1 00:05:48.238 --rc geninfo_all_blocks=1 00:05:48.238 --rc geninfo_unexecuted_blocks=1 00:05:48.238 00:05:48.238 ' 00:05:48.238 14:16:27 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.238 --rc genhtml_branch_coverage=1 00:05:48.238 --rc genhtml_function_coverage=1 00:05:48.238 --rc genhtml_legend=1 00:05:48.238 --rc geninfo_all_blocks=1 00:05:48.238 --rc geninfo_unexecuted_blocks=1 00:05:48.238 00:05:48.238 ' 00:05:48.238 14:16:27 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:48.238 --rc genhtml_branch_coverage=1 00:05:48.238 --rc genhtml_function_coverage=1 00:05:48.238 --rc genhtml_legend=1 00:05:48.238 --rc geninfo_all_blocks=1 00:05:48.238 --rc geninfo_unexecuted_blocks=1 00:05:48.238 00:05:48.238 ' 00:05:48.238 14:16:27 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.238 --rc genhtml_branch_coverage=1 00:05:48.238 --rc genhtml_function_coverage=1 00:05:48.238 --rc genhtml_legend=1 00:05:48.238 --rc geninfo_all_blocks=1 00:05:48.238 --rc geninfo_unexecuted_blocks=1 00:05:48.238 00:05:48.238 ' 00:05:48.238 14:16:27 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56879 00:05:48.238 14:16:27 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.238 14:16:27 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56879 00:05:48.238 14:16:27 rpc -- common/autotest_common.sh@835 -- # '[' -z 56879 ']' 00:05:48.238 14:16:27 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:48.238 14:16:27 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.238 14:16:27 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.238 14:16:27 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.238 14:16:27 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.238 14:16:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.496 [2024-11-20 14:16:27.353880] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:05:48.496 [2024-11-20 14:16:27.354387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56879 ] 00:05:48.755 [2024-11-20 14:16:27.554052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.755 [2024-11-20 14:16:27.714242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:48.755 [2024-11-20 14:16:27.715511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56879' to capture a snapshot of events at runtime. 00:05:48.755 [2024-11-20 14:16:27.715855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:48.755 [2024-11-20 14:16:27.716026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:48.755 [2024-11-20 14:16:27.716053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56879 for offline analysis/debug. 
00:05:48.755 [2024-11-20 14:16:27.717729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.723 14:16:28 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.723 14:16:28 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:49.723 14:16:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:49.723 14:16:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:49.723 14:16:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:49.723 14:16:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:49.723 14:16:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.723 14:16:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.723 14:16:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.723 ************************************ 00:05:49.723 START TEST rpc_integrity 00:05:49.723 ************************************ 00:05:49.723 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:49.723 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:49.723 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.723 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.723 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.723 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:49.723 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:49.982 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:49.982 14:16:28 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:49.982 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.982 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.982 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.982 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:49.982 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:49.982 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.982 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.982 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.982 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:49.982 { 00:05:49.982 "name": "Malloc0", 00:05:49.982 "aliases": [ 00:05:49.982 "09ea2647-7bd7-4a78-b188-9274f0082c00" 00:05:49.982 ], 00:05:49.982 "product_name": "Malloc disk", 00:05:49.982 "block_size": 512, 00:05:49.982 "num_blocks": 16384, 00:05:49.982 "uuid": "09ea2647-7bd7-4a78-b188-9274f0082c00", 00:05:49.982 "assigned_rate_limits": { 00:05:49.982 "rw_ios_per_sec": 0, 00:05:49.982 "rw_mbytes_per_sec": 0, 00:05:49.982 "r_mbytes_per_sec": 0, 00:05:49.982 "w_mbytes_per_sec": 0 00:05:49.982 }, 00:05:49.982 "claimed": false, 00:05:49.982 "zoned": false, 00:05:49.982 "supported_io_types": { 00:05:49.982 "read": true, 00:05:49.982 "write": true, 00:05:49.982 "unmap": true, 00:05:49.982 "flush": true, 00:05:49.982 "reset": true, 00:05:49.982 "nvme_admin": false, 00:05:49.982 "nvme_io": false, 00:05:49.982 "nvme_io_md": false, 00:05:49.982 "write_zeroes": true, 00:05:49.982 "zcopy": true, 00:05:49.982 "get_zone_info": false, 00:05:49.982 "zone_management": false, 00:05:49.982 "zone_append": false, 00:05:49.982 "compare": false, 00:05:49.982 "compare_and_write": false, 00:05:49.982 "abort": true, 00:05:49.982 "seek_hole": false, 
00:05:49.982 "seek_data": false, 00:05:49.982 "copy": true, 00:05:49.982 "nvme_iov_md": false 00:05:49.982 }, 00:05:49.982 "memory_domains": [ 00:05:49.982 { 00:05:49.982 "dma_device_id": "system", 00:05:49.982 "dma_device_type": 1 00:05:49.982 }, 00:05:49.982 { 00:05:49.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.982 "dma_device_type": 2 00:05:49.982 } 00:05:49.982 ], 00:05:49.982 "driver_specific": {} 00:05:49.982 } 00:05:49.982 ]' 00:05:49.982 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:49.982 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:49.982 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:49.982 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.982 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.982 [2024-11-20 14:16:28.822534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:49.982 [2024-11-20 14:16:28.822611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:49.982 [2024-11-20 14:16:28.822643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:49.982 [2024-11-20 14:16:28.822666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:49.983 [2024-11-20 14:16:28.825706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:49.983 [2024-11-20 14:16:28.825886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:49.983 Passthru0 00:05:49.983 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.983 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:49.983 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.983 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:49.983 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.983 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:49.983 { 00:05:49.983 "name": "Malloc0", 00:05:49.983 "aliases": [ 00:05:49.983 "09ea2647-7bd7-4a78-b188-9274f0082c00" 00:05:49.983 ], 00:05:49.983 "product_name": "Malloc disk", 00:05:49.983 "block_size": 512, 00:05:49.983 "num_blocks": 16384, 00:05:49.983 "uuid": "09ea2647-7bd7-4a78-b188-9274f0082c00", 00:05:49.983 "assigned_rate_limits": { 00:05:49.983 "rw_ios_per_sec": 0, 00:05:49.983 "rw_mbytes_per_sec": 0, 00:05:49.983 "r_mbytes_per_sec": 0, 00:05:49.983 "w_mbytes_per_sec": 0 00:05:49.983 }, 00:05:49.983 "claimed": true, 00:05:49.983 "claim_type": "exclusive_write", 00:05:49.983 "zoned": false, 00:05:49.983 "supported_io_types": { 00:05:49.983 "read": true, 00:05:49.983 "write": true, 00:05:49.983 "unmap": true, 00:05:49.983 "flush": true, 00:05:49.983 "reset": true, 00:05:49.983 "nvme_admin": false, 00:05:49.983 "nvme_io": false, 00:05:49.983 "nvme_io_md": false, 00:05:49.983 "write_zeroes": true, 00:05:49.983 "zcopy": true, 00:05:49.983 "get_zone_info": false, 00:05:49.983 "zone_management": false, 00:05:49.983 "zone_append": false, 00:05:49.983 "compare": false, 00:05:49.983 "compare_and_write": false, 00:05:49.983 "abort": true, 00:05:49.983 "seek_hole": false, 00:05:49.983 "seek_data": false, 00:05:49.983 "copy": true, 00:05:49.983 "nvme_iov_md": false 00:05:49.983 }, 00:05:49.983 "memory_domains": [ 00:05:49.983 { 00:05:49.983 "dma_device_id": "system", 00:05:49.983 "dma_device_type": 1 00:05:49.983 }, 00:05:49.983 { 00:05:49.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.983 "dma_device_type": 2 00:05:49.983 } 00:05:49.983 ], 00:05:49.983 "driver_specific": {} 00:05:49.983 }, 00:05:49.983 { 00:05:49.983 "name": "Passthru0", 00:05:49.983 "aliases": [ 00:05:49.983 "fc2eb819-bc0b-53e2-b997-1842453da347" 00:05:49.983 ], 00:05:49.983 "product_name": "passthru", 00:05:49.983 
"block_size": 512, 00:05:49.983 "num_blocks": 16384, 00:05:49.983 "uuid": "fc2eb819-bc0b-53e2-b997-1842453da347", 00:05:49.983 "assigned_rate_limits": { 00:05:49.983 "rw_ios_per_sec": 0, 00:05:49.983 "rw_mbytes_per_sec": 0, 00:05:49.983 "r_mbytes_per_sec": 0, 00:05:49.983 "w_mbytes_per_sec": 0 00:05:49.983 }, 00:05:49.983 "claimed": false, 00:05:49.983 "zoned": false, 00:05:49.983 "supported_io_types": { 00:05:49.983 "read": true, 00:05:49.983 "write": true, 00:05:49.983 "unmap": true, 00:05:49.983 "flush": true, 00:05:49.983 "reset": true, 00:05:49.983 "nvme_admin": false, 00:05:49.983 "nvme_io": false, 00:05:49.983 "nvme_io_md": false, 00:05:49.983 "write_zeroes": true, 00:05:49.983 "zcopy": true, 00:05:49.983 "get_zone_info": false, 00:05:49.983 "zone_management": false, 00:05:49.983 "zone_append": false, 00:05:49.983 "compare": false, 00:05:49.983 "compare_and_write": false, 00:05:49.983 "abort": true, 00:05:49.983 "seek_hole": false, 00:05:49.983 "seek_data": false, 00:05:49.983 "copy": true, 00:05:49.983 "nvme_iov_md": false 00:05:49.983 }, 00:05:49.983 "memory_domains": [ 00:05:49.983 { 00:05:49.983 "dma_device_id": "system", 00:05:49.983 "dma_device_type": 1 00:05:49.983 }, 00:05:49.983 { 00:05:49.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.983 "dma_device_type": 2 00:05:49.983 } 00:05:49.983 ], 00:05:49.983 "driver_specific": { 00:05:49.983 "passthru": { 00:05:49.983 "name": "Passthru0", 00:05:49.983 "base_bdev_name": "Malloc0" 00:05:49.983 } 00:05:49.983 } 00:05:49.983 } 00:05:49.983 ]' 00:05:49.983 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:49.983 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:49.983 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:49.983 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.983 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.983 14:16:28 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.983 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:49.983 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.983 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.983 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.983 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:49.983 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.983 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.983 14:16:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.983 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:49.983 14:16:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:50.242 ************************************ 00:05:50.242 END TEST rpc_integrity 00:05:50.242 ************************************ 00:05:50.242 14:16:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:50.242 00:05:50.242 real 0m0.353s 00:05:50.242 user 0m0.216s 00:05:50.242 sys 0m0.042s 00:05:50.242 14:16:29 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.242 14:16:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.242 14:16:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:50.242 14:16:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.242 14:16:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.242 14:16:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.242 ************************************ 00:05:50.242 START TEST rpc_plugins 00:05:50.242 ************************************ 00:05:50.242 14:16:29 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:50.242 14:16:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:50.242 14:16:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.242 14:16:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.242 14:16:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.243 14:16:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:50.243 14:16:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:50.243 14:16:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.243 14:16:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.243 14:16:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.243 14:16:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:50.243 { 00:05:50.243 "name": "Malloc1", 00:05:50.243 "aliases": [ 00:05:50.243 "216decb4-995e-4cd9-8181-521088590ac9" 00:05:50.243 ], 00:05:50.243 "product_name": "Malloc disk", 00:05:50.243 "block_size": 4096, 00:05:50.243 "num_blocks": 256, 00:05:50.243 "uuid": "216decb4-995e-4cd9-8181-521088590ac9", 00:05:50.243 "assigned_rate_limits": { 00:05:50.243 "rw_ios_per_sec": 0, 00:05:50.243 "rw_mbytes_per_sec": 0, 00:05:50.243 "r_mbytes_per_sec": 0, 00:05:50.243 "w_mbytes_per_sec": 0 00:05:50.243 }, 00:05:50.243 "claimed": false, 00:05:50.243 "zoned": false, 00:05:50.243 "supported_io_types": { 00:05:50.243 "read": true, 00:05:50.243 "write": true, 00:05:50.243 "unmap": true, 00:05:50.243 "flush": true, 00:05:50.243 "reset": true, 00:05:50.243 "nvme_admin": false, 00:05:50.243 "nvme_io": false, 00:05:50.243 "nvme_io_md": false, 00:05:50.243 "write_zeroes": true, 00:05:50.243 "zcopy": true, 00:05:50.243 "get_zone_info": false, 00:05:50.243 "zone_management": false, 00:05:50.243 "zone_append": false, 00:05:50.243 "compare": false, 00:05:50.243 "compare_and_write": false, 00:05:50.243 "abort": true, 00:05:50.243 "seek_hole": false, 00:05:50.243 "seek_data": false, 00:05:50.243 "copy": 
true, 00:05:50.243 "nvme_iov_md": false 00:05:50.243 }, 00:05:50.243 "memory_domains": [ 00:05:50.243 { 00:05:50.243 "dma_device_id": "system", 00:05:50.243 "dma_device_type": 1 00:05:50.243 }, 00:05:50.243 { 00:05:50.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.243 "dma_device_type": 2 00:05:50.243 } 00:05:50.243 ], 00:05:50.243 "driver_specific": {} 00:05:50.243 } 00:05:50.243 ]' 00:05:50.243 14:16:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:50.243 14:16:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:50.243 14:16:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:50.243 14:16:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.243 14:16:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.243 14:16:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.243 14:16:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:50.243 14:16:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.243 14:16:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.243 14:16:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.243 14:16:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:50.243 14:16:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:50.505 ************************************ 00:05:50.505 END TEST rpc_plugins 00:05:50.505 ************************************ 00:05:50.505 14:16:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:50.505 00:05:50.505 real 0m0.168s 00:05:50.506 user 0m0.110s 00:05:50.506 sys 0m0.015s 00:05:50.506 14:16:29 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.506 14:16:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.506 14:16:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:50.506 14:16:29 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.506 14:16:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.506 14:16:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.506 ************************************ 00:05:50.506 START TEST rpc_trace_cmd_test 00:05:50.506 ************************************ 00:05:50.506 14:16:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:50.506 14:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:50.506 14:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:50.506 14:16:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.506 14:16:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.506 14:16:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.506 14:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:50.506 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56879", 00:05:50.506 "tpoint_group_mask": "0x8", 00:05:50.506 "iscsi_conn": { 00:05:50.506 "mask": "0x2", 00:05:50.506 "tpoint_mask": "0x0" 00:05:50.506 }, 00:05:50.506 "scsi": { 00:05:50.506 "mask": "0x4", 00:05:50.506 "tpoint_mask": "0x0" 00:05:50.506 }, 00:05:50.506 "bdev": { 00:05:50.506 "mask": "0x8", 00:05:50.506 "tpoint_mask": "0xffffffffffffffff" 00:05:50.506 }, 00:05:50.506 "nvmf_rdma": { 00:05:50.506 "mask": "0x10", 00:05:50.506 "tpoint_mask": "0x0" 00:05:50.506 }, 00:05:50.506 "nvmf_tcp": { 00:05:50.506 "mask": "0x20", 00:05:50.506 "tpoint_mask": "0x0" 00:05:50.506 }, 00:05:50.506 "ftl": { 00:05:50.506 "mask": "0x40", 00:05:50.506 "tpoint_mask": "0x0" 00:05:50.506 }, 00:05:50.506 "blobfs": { 00:05:50.506 "mask": "0x80", 00:05:50.506 "tpoint_mask": "0x0" 00:05:50.506 }, 00:05:50.506 "dsa": { 00:05:50.506 "mask": "0x200", 00:05:50.506 "tpoint_mask": "0x0" 00:05:50.506 }, 00:05:50.506 "thread": { 00:05:50.506 "mask": "0x400", 00:05:50.506 
"tpoint_mask": "0x0" 00:05:50.506 }, 00:05:50.506 "nvme_pcie": { 00:05:50.506 "mask": "0x800", 00:05:50.506 "tpoint_mask": "0x0" 00:05:50.506 }, 00:05:50.506 "iaa": { 00:05:50.506 "mask": "0x1000", 00:05:50.506 "tpoint_mask": "0x0" 00:05:50.506 }, 00:05:50.506 "nvme_tcp": { 00:05:50.506 "mask": "0x2000", 00:05:50.506 "tpoint_mask": "0x0" 00:05:50.506 }, 00:05:50.506 "bdev_nvme": { 00:05:50.506 "mask": "0x4000", 00:05:50.506 "tpoint_mask": "0x0" 00:05:50.506 }, 00:05:50.506 "sock": { 00:05:50.506 "mask": "0x8000", 00:05:50.506 "tpoint_mask": "0x0" 00:05:50.506 }, 00:05:50.506 "blob": { 00:05:50.506 "mask": "0x10000", 00:05:50.506 "tpoint_mask": "0x0" 00:05:50.506 }, 00:05:50.506 "bdev_raid": { 00:05:50.506 "mask": "0x20000", 00:05:50.506 "tpoint_mask": "0x0" 00:05:50.506 }, 00:05:50.506 "scheduler": { 00:05:50.506 "mask": "0x40000", 00:05:50.506 "tpoint_mask": "0x0" 00:05:50.506 } 00:05:50.506 }' 00:05:50.506 14:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:50.506 14:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:50.506 14:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:50.506 14:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:50.506 14:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:50.506 14:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:50.506 14:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:50.769 14:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:50.769 14:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:50.769 ************************************ 00:05:50.769 END TEST rpc_trace_cmd_test 00:05:50.769 ************************************ 00:05:50.769 14:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:50.769 00:05:50.769 real 0m0.274s 00:05:50.769 user 
0m0.242s 00:05:50.769 sys 0m0.021s 00:05:50.769 14:16:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.769 14:16:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.769 14:16:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:50.769 14:16:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:50.769 14:16:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:50.769 14:16:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.769 14:16:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.769 14:16:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.769 ************************************ 00:05:50.769 START TEST rpc_daemon_integrity 00:05:50.769 ************************************ 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:50.769 { 00:05:50.769 "name": "Malloc2", 00:05:50.769 "aliases": [ 00:05:50.769 "b4dc76d8-4930-4b3e-acb8-00bbfe31fac6" 00:05:50.769 ], 00:05:50.769 "product_name": "Malloc disk", 00:05:50.769 "block_size": 512, 00:05:50.769 "num_blocks": 16384, 00:05:50.769 "uuid": "b4dc76d8-4930-4b3e-acb8-00bbfe31fac6", 00:05:50.769 "assigned_rate_limits": { 00:05:50.769 "rw_ios_per_sec": 0, 00:05:50.769 "rw_mbytes_per_sec": 0, 00:05:50.769 "r_mbytes_per_sec": 0, 00:05:50.769 "w_mbytes_per_sec": 0 00:05:50.769 }, 00:05:50.769 "claimed": false, 00:05:50.769 "zoned": false, 00:05:50.769 "supported_io_types": { 00:05:50.769 "read": true, 00:05:50.769 "write": true, 00:05:50.769 "unmap": true, 00:05:50.769 "flush": true, 00:05:50.769 "reset": true, 00:05:50.769 "nvme_admin": false, 00:05:50.769 "nvme_io": false, 00:05:50.769 "nvme_io_md": false, 00:05:50.769 "write_zeroes": true, 00:05:50.769 "zcopy": true, 00:05:50.769 "get_zone_info": false, 00:05:50.769 "zone_management": false, 00:05:50.769 "zone_append": false, 00:05:50.769 "compare": false, 00:05:50.769 "compare_and_write": false, 00:05:50.769 "abort": true, 00:05:50.769 "seek_hole": false, 00:05:50.769 "seek_data": false, 00:05:50.769 "copy": true, 00:05:50.769 "nvme_iov_md": false 00:05:50.769 }, 00:05:50.769 "memory_domains": [ 00:05:50.769 { 00:05:50.769 "dma_device_id": "system", 00:05:50.769 "dma_device_type": 1 00:05:50.769 }, 00:05:50.769 { 00:05:50.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.769 "dma_device_type": 2 00:05:50.769 } 
00:05:50.769 ], 00:05:50.769 "driver_specific": {} 00:05:50.769 } 00:05:50.769 ]' 00:05:50.769 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.029 [2024-11-20 14:16:29.770443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:51.029 [2024-11-20 14:16:29.770694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.029 [2024-11-20 14:16:29.770740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:51.029 [2024-11-20 14:16:29.770760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:51.029 [2024-11-20 14:16:29.773917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.029 [2024-11-20 14:16:29.774131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:51.029 Passthru0 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:51.029 { 00:05:51.029 "name": "Malloc2", 00:05:51.029 "aliases": [ 00:05:51.029 "b4dc76d8-4930-4b3e-acb8-00bbfe31fac6" 
00:05:51.029 ], 00:05:51.029 "product_name": "Malloc disk", 00:05:51.029 "block_size": 512, 00:05:51.029 "num_blocks": 16384, 00:05:51.029 "uuid": "b4dc76d8-4930-4b3e-acb8-00bbfe31fac6", 00:05:51.029 "assigned_rate_limits": { 00:05:51.029 "rw_ios_per_sec": 0, 00:05:51.029 "rw_mbytes_per_sec": 0, 00:05:51.029 "r_mbytes_per_sec": 0, 00:05:51.029 "w_mbytes_per_sec": 0 00:05:51.029 }, 00:05:51.029 "claimed": true, 00:05:51.029 "claim_type": "exclusive_write", 00:05:51.029 "zoned": false, 00:05:51.029 "supported_io_types": { 00:05:51.029 "read": true, 00:05:51.029 "write": true, 00:05:51.029 "unmap": true, 00:05:51.029 "flush": true, 00:05:51.029 "reset": true, 00:05:51.029 "nvme_admin": false, 00:05:51.029 "nvme_io": false, 00:05:51.029 "nvme_io_md": false, 00:05:51.029 "write_zeroes": true, 00:05:51.029 "zcopy": true, 00:05:51.029 "get_zone_info": false, 00:05:51.029 "zone_management": false, 00:05:51.029 "zone_append": false, 00:05:51.029 "compare": false, 00:05:51.029 "compare_and_write": false, 00:05:51.029 "abort": true, 00:05:51.029 "seek_hole": false, 00:05:51.029 "seek_data": false, 00:05:51.029 "copy": true, 00:05:51.029 "nvme_iov_md": false 00:05:51.029 }, 00:05:51.029 "memory_domains": [ 00:05:51.029 { 00:05:51.029 "dma_device_id": "system", 00:05:51.029 "dma_device_type": 1 00:05:51.029 }, 00:05:51.029 { 00:05:51.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.029 "dma_device_type": 2 00:05:51.029 } 00:05:51.029 ], 00:05:51.029 "driver_specific": {} 00:05:51.029 }, 00:05:51.029 { 00:05:51.029 "name": "Passthru0", 00:05:51.029 "aliases": [ 00:05:51.029 "164bda7a-ef1a-5e6b-96a8-9557496a9013" 00:05:51.029 ], 00:05:51.029 "product_name": "passthru", 00:05:51.029 "block_size": 512, 00:05:51.029 "num_blocks": 16384, 00:05:51.029 "uuid": "164bda7a-ef1a-5e6b-96a8-9557496a9013", 00:05:51.029 "assigned_rate_limits": { 00:05:51.029 "rw_ios_per_sec": 0, 00:05:51.029 "rw_mbytes_per_sec": 0, 00:05:51.029 "r_mbytes_per_sec": 0, 00:05:51.029 "w_mbytes_per_sec": 0 
00:05:51.029 }, 00:05:51.029 "claimed": false, 00:05:51.029 "zoned": false, 00:05:51.029 "supported_io_types": { 00:05:51.029 "read": true, 00:05:51.029 "write": true, 00:05:51.029 "unmap": true, 00:05:51.029 "flush": true, 00:05:51.029 "reset": true, 00:05:51.029 "nvme_admin": false, 00:05:51.029 "nvme_io": false, 00:05:51.029 "nvme_io_md": false, 00:05:51.029 "write_zeroes": true, 00:05:51.029 "zcopy": true, 00:05:51.029 "get_zone_info": false, 00:05:51.029 "zone_management": false, 00:05:51.029 "zone_append": false, 00:05:51.029 "compare": false, 00:05:51.029 "compare_and_write": false, 00:05:51.029 "abort": true, 00:05:51.029 "seek_hole": false, 00:05:51.029 "seek_data": false, 00:05:51.029 "copy": true, 00:05:51.029 "nvme_iov_md": false 00:05:51.029 }, 00:05:51.029 "memory_domains": [ 00:05:51.029 { 00:05:51.029 "dma_device_id": "system", 00:05:51.029 "dma_device_type": 1 00:05:51.029 }, 00:05:51.029 { 00:05:51.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.029 "dma_device_type": 2 00:05:51.029 } 00:05:51.029 ], 00:05:51.029 "driver_specific": { 00:05:51.029 "passthru": { 00:05:51.029 "name": "Passthru0", 00:05:51.029 "base_bdev_name": "Malloc2" 00:05:51.029 } 00:05:51.029 } 00:05:51.029 } 00:05:51.029 ]' 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:51.029 ************************************ 00:05:51.029 END TEST rpc_daemon_integrity 00:05:51.029 ************************************ 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:51.029 00:05:51.029 real 0m0.363s 00:05:51.029 user 0m0.224s 00:05:51.029 sys 0m0.044s 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.029 14:16:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.288 14:16:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:51.288 14:16:30 rpc -- rpc/rpc.sh@84 -- # killprocess 56879 00:05:51.288 14:16:30 rpc -- common/autotest_common.sh@954 -- # '[' -z 56879 ']' 00:05:51.288 14:16:30 rpc -- common/autotest_common.sh@958 -- # kill -0 56879 00:05:51.288 14:16:30 rpc -- common/autotest_common.sh@959 -- # uname 00:05:51.288 14:16:30 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.288 14:16:30 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56879 00:05:51.288 killing process with pid 56879 00:05:51.288 14:16:30 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.288 14:16:30 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:05:51.288 14:16:30 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56879' 00:05:51.288 14:16:30 rpc -- common/autotest_common.sh@973 -- # kill 56879 00:05:51.288 14:16:30 rpc -- common/autotest_common.sh@978 -- # wait 56879 00:05:53.823 00:05:53.823 real 0m5.248s 00:05:53.823 user 0m5.971s 00:05:53.823 sys 0m0.926s 00:05:53.823 14:16:32 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.823 14:16:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.823 ************************************ 00:05:53.823 END TEST rpc 00:05:53.823 ************************************ 00:05:53.823 14:16:32 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:53.823 14:16:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.823 14:16:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.823 14:16:32 -- common/autotest_common.sh@10 -- # set +x 00:05:53.823 ************************************ 00:05:53.823 START TEST skip_rpc 00:05:53.823 ************************************ 00:05:53.823 14:16:32 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:53.823 * Looking for test storage... 
00:05:53.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:53.823 14:16:32 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.823 14:16:32 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.823 14:16:32 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.823 14:16:32 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.823 14:16:32 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:53.823 14:16:32 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.823 14:16:32 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.823 --rc genhtml_branch_coverage=1 00:05:53.823 --rc genhtml_function_coverage=1 00:05:53.823 --rc genhtml_legend=1 00:05:53.823 --rc geninfo_all_blocks=1 00:05:53.823 --rc geninfo_unexecuted_blocks=1 00:05:53.823 00:05:53.823 ' 00:05:53.823 14:16:32 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.823 --rc genhtml_branch_coverage=1 00:05:53.823 --rc genhtml_function_coverage=1 00:05:53.823 --rc genhtml_legend=1 00:05:53.823 --rc geninfo_all_blocks=1 00:05:53.823 --rc geninfo_unexecuted_blocks=1 00:05:53.823 00:05:53.823 ' 00:05:53.823 14:16:32 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:53.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.823 --rc genhtml_branch_coverage=1 00:05:53.823 --rc genhtml_function_coverage=1 00:05:53.823 --rc genhtml_legend=1 00:05:53.823 --rc geninfo_all_blocks=1 00:05:53.823 --rc geninfo_unexecuted_blocks=1 00:05:53.823 00:05:53.823 ' 00:05:53.823 14:16:32 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.823 --rc genhtml_branch_coverage=1 00:05:53.823 --rc genhtml_function_coverage=1 00:05:53.823 --rc genhtml_legend=1 00:05:53.823 --rc geninfo_all_blocks=1 00:05:53.823 --rc geninfo_unexecuted_blocks=1 00:05:53.823 00:05:53.823 ' 00:05:53.823 14:16:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:53.823 14:16:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:53.823 14:16:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:53.823 14:16:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.823 14:16:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.823 14:16:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.823 ************************************ 00:05:53.823 START TEST skip_rpc 00:05:53.823 ************************************ 00:05:53.823 14:16:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:53.823 14:16:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57112 00:05:53.823 14:16:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:53.823 14:16:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.823 14:16:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:53.823 [2024-11-20 14:16:32.645780] Starting SPDK v25.01-pre 
git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:05:53.823 [2024-11-20 14:16:32.646003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57112 ] 00:05:54.082 [2024-11-20 14:16:32.841091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.082 [2024-11-20 14:16:33.013117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57112 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57112 ']' 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57112 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57112 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.394 killing process with pid 57112 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57112' 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57112 00:05:59.394 14:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57112 00:06:01.291 00:06:01.291 real 0m7.259s 00:06:01.291 user 0m6.618s 00:06:01.291 sys 0m0.515s 00:06:01.291 14:16:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.291 14:16:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.291 ************************************ 00:06:01.291 END TEST skip_rpc 00:06:01.291 ************************************ 00:06:01.291 14:16:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:01.291 14:16:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.291 14:16:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.291 14:16:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.291 
************************************ 00:06:01.291 START TEST skip_rpc_with_json 00:06:01.291 ************************************ 00:06:01.291 14:16:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:01.291 14:16:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:01.291 14:16:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57217 00:06:01.291 14:16:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.291 14:16:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.291 14:16:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57217 00:06:01.291 14:16:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57217 ']' 00:06:01.291 14:16:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.291 14:16:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.291 14:16:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.291 14:16:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.292 14:16:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.292 [2024-11-20 14:16:39.931222] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:06:01.292 [2024-11-20 14:16:39.931413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57217 ] 00:06:01.292 [2024-11-20 14:16:40.115619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.292 [2024-11-20 14:16:40.254673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.233 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.233 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:02.233 14:16:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:02.233 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.233 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.233 [2024-11-20 14:16:41.155545] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:02.233 request: 00:06:02.233 { 00:06:02.233 "trtype": "tcp", 00:06:02.234 "method": "nvmf_get_transports", 00:06:02.234 "req_id": 1 00:06:02.234 } 00:06:02.234 Got JSON-RPC error response 00:06:02.234 response: 00:06:02.234 { 00:06:02.234 "code": -19, 00:06:02.234 "message": "No such device" 00:06:02.234 } 00:06:02.234 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:02.234 14:16:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:02.234 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.234 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.234 [2024-11-20 14:16:41.167761] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:02.234 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.234 14:16:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:02.234 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.234 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.492 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.492 14:16:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:02.492 { 00:06:02.492 "subsystems": [ 00:06:02.492 { 00:06:02.492 "subsystem": "fsdev", 00:06:02.492 "config": [ 00:06:02.492 { 00:06:02.492 "method": "fsdev_set_opts", 00:06:02.492 "params": { 00:06:02.492 "fsdev_io_pool_size": 65535, 00:06:02.492 "fsdev_io_cache_size": 256 00:06:02.492 } 00:06:02.492 } 00:06:02.492 ] 00:06:02.492 }, 00:06:02.492 { 00:06:02.492 "subsystem": "keyring", 00:06:02.492 "config": [] 00:06:02.492 }, 00:06:02.492 { 00:06:02.492 "subsystem": "iobuf", 00:06:02.492 "config": [ 00:06:02.492 { 00:06:02.492 "method": "iobuf_set_options", 00:06:02.492 "params": { 00:06:02.492 "small_pool_count": 8192, 00:06:02.492 "large_pool_count": 1024, 00:06:02.492 "small_bufsize": 8192, 00:06:02.492 "large_bufsize": 135168, 00:06:02.492 "enable_numa": false 00:06:02.492 } 00:06:02.492 } 00:06:02.492 ] 00:06:02.492 }, 00:06:02.492 { 00:06:02.492 "subsystem": "sock", 00:06:02.492 "config": [ 00:06:02.492 { 00:06:02.492 "method": "sock_set_default_impl", 00:06:02.492 "params": { 00:06:02.492 "impl_name": "posix" 00:06:02.492 } 00:06:02.492 }, 00:06:02.492 { 00:06:02.492 "method": "sock_impl_set_options", 00:06:02.492 "params": { 00:06:02.492 "impl_name": "ssl", 00:06:02.492 "recv_buf_size": 4096, 00:06:02.492 "send_buf_size": 4096, 00:06:02.492 "enable_recv_pipe": true, 00:06:02.492 "enable_quickack": false, 00:06:02.492 
"enable_placement_id": 0, 00:06:02.492 "enable_zerocopy_send_server": true, 00:06:02.492 "enable_zerocopy_send_client": false, 00:06:02.492 "zerocopy_threshold": 0, 00:06:02.492 "tls_version": 0, 00:06:02.492 "enable_ktls": false 00:06:02.492 } 00:06:02.492 }, 00:06:02.492 { 00:06:02.492 "method": "sock_impl_set_options", 00:06:02.492 "params": { 00:06:02.492 "impl_name": "posix", 00:06:02.492 "recv_buf_size": 2097152, 00:06:02.492 "send_buf_size": 2097152, 00:06:02.492 "enable_recv_pipe": true, 00:06:02.492 "enable_quickack": false, 00:06:02.492 "enable_placement_id": 0, 00:06:02.492 "enable_zerocopy_send_server": true, 00:06:02.492 "enable_zerocopy_send_client": false, 00:06:02.492 "zerocopy_threshold": 0, 00:06:02.492 "tls_version": 0, 00:06:02.492 "enable_ktls": false 00:06:02.492 } 00:06:02.492 } 00:06:02.492 ] 00:06:02.492 }, 00:06:02.492 { 00:06:02.492 "subsystem": "vmd", 00:06:02.492 "config": [] 00:06:02.492 }, 00:06:02.492 { 00:06:02.492 "subsystem": "accel", 00:06:02.492 "config": [ 00:06:02.492 { 00:06:02.492 "method": "accel_set_options", 00:06:02.492 "params": { 00:06:02.492 "small_cache_size": 128, 00:06:02.492 "large_cache_size": 16, 00:06:02.492 "task_count": 2048, 00:06:02.492 "sequence_count": 2048, 00:06:02.492 "buf_count": 2048 00:06:02.492 } 00:06:02.492 } 00:06:02.493 ] 00:06:02.493 }, 00:06:02.493 { 00:06:02.493 "subsystem": "bdev", 00:06:02.493 "config": [ 00:06:02.493 { 00:06:02.493 "method": "bdev_set_options", 00:06:02.493 "params": { 00:06:02.493 "bdev_io_pool_size": 65535, 00:06:02.493 "bdev_io_cache_size": 256, 00:06:02.493 "bdev_auto_examine": true, 00:06:02.493 "iobuf_small_cache_size": 128, 00:06:02.493 "iobuf_large_cache_size": 16 00:06:02.493 } 00:06:02.493 }, 00:06:02.493 { 00:06:02.493 "method": "bdev_raid_set_options", 00:06:02.493 "params": { 00:06:02.493 "process_window_size_kb": 1024, 00:06:02.493 "process_max_bandwidth_mb_sec": 0 00:06:02.493 } 00:06:02.493 }, 00:06:02.493 { 00:06:02.493 "method": "bdev_iscsi_set_options", 
00:06:02.493 "params": { 00:06:02.493 "timeout_sec": 30 00:06:02.493 } 00:06:02.493 }, 00:06:02.493 { 00:06:02.493 "method": "bdev_nvme_set_options", 00:06:02.493 "params": { 00:06:02.493 "action_on_timeout": "none", 00:06:02.493 "timeout_us": 0, 00:06:02.493 "timeout_admin_us": 0, 00:06:02.493 "keep_alive_timeout_ms": 10000, 00:06:02.493 "arbitration_burst": 0, 00:06:02.493 "low_priority_weight": 0, 00:06:02.493 "medium_priority_weight": 0, 00:06:02.493 "high_priority_weight": 0, 00:06:02.493 "nvme_adminq_poll_period_us": 10000, 00:06:02.493 "nvme_ioq_poll_period_us": 0, 00:06:02.493 "io_queue_requests": 0, 00:06:02.493 "delay_cmd_submit": true, 00:06:02.493 "transport_retry_count": 4, 00:06:02.493 "bdev_retry_count": 3, 00:06:02.493 "transport_ack_timeout": 0, 00:06:02.493 "ctrlr_loss_timeout_sec": 0, 00:06:02.493 "reconnect_delay_sec": 0, 00:06:02.493 "fast_io_fail_timeout_sec": 0, 00:06:02.493 "disable_auto_failback": false, 00:06:02.493 "generate_uuids": false, 00:06:02.493 "transport_tos": 0, 00:06:02.493 "nvme_error_stat": false, 00:06:02.493 "rdma_srq_size": 0, 00:06:02.493 "io_path_stat": false, 00:06:02.493 "allow_accel_sequence": false, 00:06:02.493 "rdma_max_cq_size": 0, 00:06:02.493 "rdma_cm_event_timeout_ms": 0, 00:06:02.493 "dhchap_digests": [ 00:06:02.493 "sha256", 00:06:02.493 "sha384", 00:06:02.493 "sha512" 00:06:02.493 ], 00:06:02.493 "dhchap_dhgroups": [ 00:06:02.493 "null", 00:06:02.493 "ffdhe2048", 00:06:02.493 "ffdhe3072", 00:06:02.493 "ffdhe4096", 00:06:02.493 "ffdhe6144", 00:06:02.493 "ffdhe8192" 00:06:02.493 ] 00:06:02.493 } 00:06:02.493 }, 00:06:02.493 { 00:06:02.493 "method": "bdev_nvme_set_hotplug", 00:06:02.493 "params": { 00:06:02.493 "period_us": 100000, 00:06:02.493 "enable": false 00:06:02.493 } 00:06:02.493 }, 00:06:02.493 { 00:06:02.493 "method": "bdev_wait_for_examine" 00:06:02.493 } 00:06:02.493 ] 00:06:02.493 }, 00:06:02.493 { 00:06:02.493 "subsystem": "scsi", 00:06:02.493 "config": null 00:06:02.493 }, 00:06:02.493 { 
00:06:02.493 "subsystem": "scheduler", 00:06:02.493 "config": [ 00:06:02.493 { 00:06:02.493 "method": "framework_set_scheduler", 00:06:02.493 "params": { 00:06:02.493 "name": "static" 00:06:02.493 } 00:06:02.493 } 00:06:02.493 ] 00:06:02.493 }, 00:06:02.493 { 00:06:02.493 "subsystem": "vhost_scsi", 00:06:02.493 "config": [] 00:06:02.493 }, 00:06:02.493 { 00:06:02.493 "subsystem": "vhost_blk", 00:06:02.493 "config": [] 00:06:02.493 }, 00:06:02.493 { 00:06:02.493 "subsystem": "ublk", 00:06:02.493 "config": [] 00:06:02.493 }, 00:06:02.493 { 00:06:02.493 "subsystem": "nbd", 00:06:02.493 "config": [] 00:06:02.493 }, 00:06:02.493 { 00:06:02.493 "subsystem": "nvmf", 00:06:02.493 "config": [ 00:06:02.493 { 00:06:02.493 "method": "nvmf_set_config", 00:06:02.493 "params": { 00:06:02.493 "discovery_filter": "match_any", 00:06:02.493 "admin_cmd_passthru": { 00:06:02.493 "identify_ctrlr": false 00:06:02.493 }, 00:06:02.493 "dhchap_digests": [ 00:06:02.493 "sha256", 00:06:02.493 "sha384", 00:06:02.493 "sha512" 00:06:02.493 ], 00:06:02.493 "dhchap_dhgroups": [ 00:06:02.493 "null", 00:06:02.493 "ffdhe2048", 00:06:02.493 "ffdhe3072", 00:06:02.493 "ffdhe4096", 00:06:02.493 "ffdhe6144", 00:06:02.493 "ffdhe8192" 00:06:02.493 ] 00:06:02.493 } 00:06:02.493 }, 00:06:02.493 { 00:06:02.493 "method": "nvmf_set_max_subsystems", 00:06:02.493 "params": { 00:06:02.493 "max_subsystems": 1024 00:06:02.493 } 00:06:02.493 }, 00:06:02.493 { 00:06:02.493 "method": "nvmf_set_crdt", 00:06:02.493 "params": { 00:06:02.493 "crdt1": 0, 00:06:02.493 "crdt2": 0, 00:06:02.493 "crdt3": 0 00:06:02.493 } 00:06:02.493 }, 00:06:02.493 { 00:06:02.493 "method": "nvmf_create_transport", 00:06:02.493 "params": { 00:06:02.493 "trtype": "TCP", 00:06:02.493 "max_queue_depth": 128, 00:06:02.493 "max_io_qpairs_per_ctrlr": 127, 00:06:02.493 "in_capsule_data_size": 4096, 00:06:02.493 "max_io_size": 131072, 00:06:02.493 "io_unit_size": 131072, 00:06:02.493 "max_aq_depth": 128, 00:06:02.493 "num_shared_buffers": 511, 
00:06:02.493 "buf_cache_size": 4294967295, 00:06:02.493 "dif_insert_or_strip": false, 00:06:02.493 "zcopy": false, 00:06:02.493 "c2h_success": true, 00:06:02.493 "sock_priority": 0, 00:06:02.493 "abort_timeout_sec": 1, 00:06:02.493 "ack_timeout": 0, 00:06:02.493 "data_wr_pool_size": 0 00:06:02.493 } 00:06:02.493 } 00:06:02.493 ] 00:06:02.493 }, 00:06:02.493 { 00:06:02.493 "subsystem": "iscsi", 00:06:02.493 "config": [ 00:06:02.493 { 00:06:02.493 "method": "iscsi_set_options", 00:06:02.493 "params": { 00:06:02.493 "node_base": "iqn.2016-06.io.spdk", 00:06:02.493 "max_sessions": 128, 00:06:02.493 "max_connections_per_session": 2, 00:06:02.493 "max_queue_depth": 64, 00:06:02.493 "default_time2wait": 2, 00:06:02.493 "default_time2retain": 20, 00:06:02.493 "first_burst_length": 8192, 00:06:02.493 "immediate_data": true, 00:06:02.493 "allow_duplicated_isid": false, 00:06:02.493 "error_recovery_level": 0, 00:06:02.493 "nop_timeout": 60, 00:06:02.493 "nop_in_interval": 30, 00:06:02.493 "disable_chap": false, 00:06:02.493 "require_chap": false, 00:06:02.493 "mutual_chap": false, 00:06:02.493 "chap_group": 0, 00:06:02.493 "max_large_datain_per_connection": 64, 00:06:02.493 "max_r2t_per_connection": 4, 00:06:02.493 "pdu_pool_size": 36864, 00:06:02.493 "immediate_data_pool_size": 16384, 00:06:02.493 "data_out_pool_size": 2048 00:06:02.493 } 00:06:02.493 } 00:06:02.493 ] 00:06:02.493 } 00:06:02.493 ] 00:06:02.493 } 00:06:02.493 14:16:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:02.493 14:16:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57217 00:06:02.493 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57217 ']' 00:06:02.493 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57217 00:06:02.493 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:02.493 14:16:41 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.493 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57217 00:06:02.493 killing process with pid 57217 00:06:02.493 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.493 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.493 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57217' 00:06:02.494 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57217 00:06:02.494 14:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57217 00:06:05.024 14:16:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57268 00:06:05.024 14:16:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:05.024 14:16:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:10.303 14:16:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57268 00:06:10.303 14:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57268 ']' 00:06:10.303 14:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57268 00:06:10.303 14:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:10.303 14:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.303 14:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57268 00:06:10.303 14:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.303 killing process with pid 57268 00:06:10.303 14:16:48 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.303 14:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57268' 00:06:10.304 14:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57268 00:06:10.304 14:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57268 00:06:12.287 14:16:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:12.287 14:16:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:12.287 00:06:12.287 real 0m11.381s 00:06:12.287 user 0m10.722s 00:06:12.287 sys 0m1.096s 00:06:12.287 ************************************ 00:06:12.287 END TEST skip_rpc_with_json 00:06:12.287 ************************************ 00:06:12.287 14:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.287 14:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.287 14:16:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:12.287 14:16:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.287 14:16:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.287 14:16:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.287 ************************************ 00:06:12.287 START TEST skip_rpc_with_delay 00:06:12.287 ************************************ 00:06:12.287 14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:12.287 14:16:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:12.287 14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:12.287 
14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:12.287 14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.287 14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.287 14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.287 14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.287 14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.287 14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.287 14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.287 14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:12.287 14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:12.545 [2024-11-20 14:16:51.362881] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:12.545 14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:12.545 14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.545 14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.545 14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.545 00:06:12.545 real 0m0.198s 00:06:12.545 user 0m0.098s 00:06:12.545 sys 0m0.096s 00:06:12.545 14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.545 ************************************ 00:06:12.545 END TEST skip_rpc_with_delay 00:06:12.545 ************************************ 00:06:12.546 14:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:12.546 14:16:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:12.546 14:16:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:12.546 14:16:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:12.546 14:16:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.546 14:16:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.546 14:16:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.546 ************************************ 00:06:12.546 START TEST exit_on_failed_rpc_init 00:06:12.546 ************************************ 00:06:12.546 14:16:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:12.546 14:16:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57407 00:06:12.546 14:16:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.546 14:16:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57407 00:06:12.546 14:16:51 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57407 ']' 00:06:12.546 14:16:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.546 14:16:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.546 14:16:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.546 14:16:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.546 14:16:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:12.804 [2024-11-20 14:16:51.605227] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:06:12.804 [2024-11-20 14:16:51.605472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57407 ] 00:06:13.062 [2024-11-20 14:16:51.803677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.062 [2024-11-20 14:16:51.935730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.997 14:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.998 14:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:13.998 14:16:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.998 14:16:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:13.998 14:16:52 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:13.998 14:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:13.998 14:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.998 14:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.998 14:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.998 14:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.998 14:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.998 14:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.998 14:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.998 14:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:13.998 14:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:13.998 [2024-11-20 14:16:52.975751] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:06:13.998 [2024-11-20 14:16:52.975948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57425 ] 00:06:14.257 [2024-11-20 14:16:53.173469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.515 [2024-11-20 14:16:53.363275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.515 [2024-11-20 14:16:53.363427] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:14.515 [2024-11-20 14:16:53.363454] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:14.515 [2024-11-20 14:16:53.363474] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.774 14:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:14.774 14:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.774 14:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:14.774 14:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:14.774 14:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:14.774 14:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.774 14:16:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:14.774 14:16:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57407 00:06:14.774 14:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57407 ']' 00:06:14.774 14:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57407 00:06:14.774 14:16:53 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:14.774 14:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.774 14:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57407 00:06:14.774 14:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.774 killing process with pid 57407 00:06:14.774 14:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.774 14:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57407' 00:06:14.774 14:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57407 00:06:14.774 14:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57407 00:06:18.056 00:06:18.056 real 0m4.853s 00:06:18.056 user 0m5.393s 00:06:18.056 sys 0m0.663s 00:06:18.056 14:16:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.056 14:16:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:18.056 ************************************ 00:06:18.056 END TEST exit_on_failed_rpc_init 00:06:18.056 ************************************ 00:06:18.056 14:16:56 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:18.056 00:06:18.056 real 0m24.081s 00:06:18.056 user 0m23.019s 00:06:18.056 sys 0m2.574s 00:06:18.056 14:16:56 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.056 ************************************ 00:06:18.056 END TEST skip_rpc 00:06:18.056 ************************************ 00:06:18.056 14:16:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.056 14:16:56 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:18.056 14:16:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.056 14:16:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.056 14:16:56 -- common/autotest_common.sh@10 -- # set +x 00:06:18.056 ************************************ 00:06:18.056 START TEST rpc_client 00:06:18.056 ************************************ 00:06:18.056 14:16:56 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:18.056 * Looking for test storage... 00:06:18.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:18.056 14:16:56 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.056 14:16:56 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.056 14:16:56 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.056 14:16:56 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.056 14:16:56 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.056 14:16:56 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.056 14:16:56 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.056 14:16:56 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.056 14:16:56 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.056 14:16:56 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.056 14:16:56 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.056 14:16:56 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.056 14:16:56 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.056 14:16:56 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.056 14:16:56 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.056 14:16:56 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:18.057 14:16:56 rpc_client -- scripts/common.sh@345 
-- # : 1 00:06:18.057 14:16:56 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.057 14:16:56 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.057 14:16:56 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:18.057 14:16:56 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:18.057 14:16:56 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.057 14:16:56 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:18.057 14:16:56 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.057 14:16:56 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:18.057 14:16:56 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:18.057 14:16:56 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.057 14:16:56 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:18.057 14:16:56 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.057 14:16:56 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.057 14:16:56 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.057 14:16:56 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:18.057 14:16:56 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.057 14:16:56 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.057 --rc genhtml_branch_coverage=1 00:06:18.057 --rc genhtml_function_coverage=1 00:06:18.057 --rc genhtml_legend=1 00:06:18.057 --rc geninfo_all_blocks=1 00:06:18.057 --rc geninfo_unexecuted_blocks=1 00:06:18.057 00:06:18.057 ' 00:06:18.057 14:16:56 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.057 --rc genhtml_branch_coverage=1 00:06:18.057 --rc genhtml_function_coverage=1 00:06:18.057 --rc 
genhtml_legend=1 00:06:18.057 --rc geninfo_all_blocks=1 00:06:18.057 --rc geninfo_unexecuted_blocks=1 00:06:18.057 00:06:18.057 ' 00:06:18.057 14:16:56 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.057 --rc genhtml_branch_coverage=1 00:06:18.057 --rc genhtml_function_coverage=1 00:06:18.057 --rc genhtml_legend=1 00:06:18.057 --rc geninfo_all_blocks=1 00:06:18.057 --rc geninfo_unexecuted_blocks=1 00:06:18.057 00:06:18.057 ' 00:06:18.057 14:16:56 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.057 --rc genhtml_branch_coverage=1 00:06:18.057 --rc genhtml_function_coverage=1 00:06:18.057 --rc genhtml_legend=1 00:06:18.057 --rc geninfo_all_blocks=1 00:06:18.057 --rc geninfo_unexecuted_blocks=1 00:06:18.057 00:06:18.057 ' 00:06:18.057 14:16:56 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:18.057 OK 00:06:18.057 14:16:56 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:18.057 00:06:18.057 real 0m0.234s 00:06:18.057 user 0m0.137s 00:06:18.057 sys 0m0.107s 00:06:18.057 14:16:56 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.057 14:16:56 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:18.057 ************************************ 00:06:18.057 END TEST rpc_client 00:06:18.057 ************************************ 00:06:18.057 14:16:56 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:18.057 14:16:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.057 14:16:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.057 14:16:56 -- common/autotest_common.sh@10 -- # set +x 00:06:18.057 ************************************ 00:06:18.057 START TEST json_config 
00:06:18.057 ************************************ 00:06:18.057 14:16:56 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:18.057 14:16:56 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.057 14:16:56 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.057 14:16:56 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.057 14:16:56 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.057 14:16:56 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.057 14:16:56 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.057 14:16:56 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.057 14:16:56 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.057 14:16:56 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.057 14:16:56 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.057 14:16:56 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.057 14:16:56 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.057 14:16:56 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.057 14:16:56 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.057 14:16:56 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.057 14:16:56 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:18.057 14:16:56 json_config -- scripts/common.sh@345 -- # : 1 00:06:18.057 14:16:56 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.057 14:16:56 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.057 14:16:56 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:18.057 14:16:56 json_config -- scripts/common.sh@353 -- # local d=1 00:06:18.057 14:16:56 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.057 14:16:56 json_config -- scripts/common.sh@355 -- # echo 1 00:06:18.057 14:16:56 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.057 14:16:56 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:18.057 14:16:56 json_config -- scripts/common.sh@353 -- # local d=2 00:06:18.057 14:16:56 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.057 14:16:56 json_config -- scripts/common.sh@355 -- # echo 2 00:06:18.057 14:16:56 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.057 14:16:56 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.057 14:16:56 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.057 14:16:56 json_config -- scripts/common.sh@368 -- # return 0 00:06:18.057 14:16:56 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.057 14:16:56 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.057 --rc genhtml_branch_coverage=1 00:06:18.057 --rc genhtml_function_coverage=1 00:06:18.057 --rc genhtml_legend=1 00:06:18.057 --rc geninfo_all_blocks=1 00:06:18.057 --rc geninfo_unexecuted_blocks=1 00:06:18.057 00:06:18.057 ' 00:06:18.057 14:16:56 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.057 --rc genhtml_branch_coverage=1 00:06:18.057 --rc genhtml_function_coverage=1 00:06:18.057 --rc genhtml_legend=1 00:06:18.057 --rc geninfo_all_blocks=1 00:06:18.057 --rc geninfo_unexecuted_blocks=1 00:06:18.057 00:06:18.057 ' 00:06:18.057 14:16:56 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.057 --rc genhtml_branch_coverage=1 00:06:18.057 --rc genhtml_function_coverage=1 00:06:18.057 --rc genhtml_legend=1 00:06:18.057 --rc geninfo_all_blocks=1 00:06:18.057 --rc geninfo_unexecuted_blocks=1 00:06:18.057 00:06:18.057 ' 00:06:18.057 14:16:56 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.057 --rc genhtml_branch_coverage=1 00:06:18.057 --rc genhtml_function_coverage=1 00:06:18.057 --rc genhtml_legend=1 00:06:18.057 --rc geninfo_all_blocks=1 00:06:18.057 --rc geninfo_unexecuted_blocks=1 00:06:18.057 00:06:18.057 ' 00:06:18.057 14:16:56 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:db59ceda-2696-4653-8c92-acb430fd34b6 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=db59ceda-2696-4653-8c92-acb430fd34b6 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.057 14:16:56 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.057 14:16:56 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.057 14:16:56 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.057 14:16:56 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.057 14:16:56 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.057 14:16:56 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.058 14:16:56 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.058 14:16:56 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.058 14:16:56 json_config -- paths/export.sh@5 -- # export PATH 00:06:18.058 14:16:56 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.058 14:16:56 json_config -- nvmf/common.sh@51 -- # : 0 00:06:18.058 14:16:56 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:18.058 14:16:56 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:18.058 14:16:56 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.058 14:16:56 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.058 14:16:56 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.058 14:16:56 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:18.058 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:18.058 14:16:56 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:18.058 14:16:56 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:18.058 14:16:56 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:18.058 14:16:56 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
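The trace above records a real script error: nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'` and bash reports "integer expression expected", because `-eq` requires integer operands and the flag variable expanded empty. A minimal sketch of a guarded form (the `is_enabled` helper name is hypothetical, not from SPDK):

```shell
#!/usr/bin/env bash
# [ '' -eq 1 ] errors because -eq requires integer operands on both sides.
# Defaulting the expansion to 0 keeps the test well-formed.
is_enabled() {
    [ "${1:-0}" -eq 1 ]   # empty/unset is treated as 0, not an error
}

is_enabled ""  && echo on || echo off   # prints "off", no bash error
is_enabled 1   && echo on || echo off   # prints "on"
```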
00:06:18.058 14:16:56 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:18.058 14:16:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:18.058 14:16:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:18.058 14:16:56 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:18.058 WARNING: No tests are enabled so not running JSON configuration tests 00:06:18.058 14:16:56 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:18.058 14:16:56 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:18.058 00:06:18.058 real 0m0.166s 00:06:18.058 user 0m0.105s 00:06:18.058 sys 0m0.070s 00:06:18.058 14:16:56 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.058 14:16:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.058 ************************************ 00:06:18.058 END TEST json_config 00:06:18.058 ************************************ 00:06:18.058 14:16:56 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:18.058 14:16:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.058 14:16:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.058 14:16:56 -- common/autotest_common.sh@10 -- # set +x 00:06:18.058 ************************************ 00:06:18.058 START TEST json_config_extra_key 00:06:18.058 ************************************ 00:06:18.058 14:16:56 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:18.058 14:16:56 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.058 14:16:56 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.058 14:16:56 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.318 14:16:57 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:18.318 14:16:57 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.318 14:16:57 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.318 --rc genhtml_branch_coverage=1 00:06:18.318 --rc genhtml_function_coverage=1 00:06:18.318 --rc genhtml_legend=1 00:06:18.318 --rc geninfo_all_blocks=1 00:06:18.318 --rc geninfo_unexecuted_blocks=1 00:06:18.318 00:06:18.318 ' 00:06:18.318 14:16:57 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.318 --rc genhtml_branch_coverage=1 00:06:18.318 --rc genhtml_function_coverage=1 00:06:18.318 --rc 
genhtml_legend=1 00:06:18.318 --rc geninfo_all_blocks=1 00:06:18.318 --rc geninfo_unexecuted_blocks=1 00:06:18.318 00:06:18.318 ' 00:06:18.318 14:16:57 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.318 --rc genhtml_branch_coverage=1 00:06:18.318 --rc genhtml_function_coverage=1 00:06:18.318 --rc genhtml_legend=1 00:06:18.318 --rc geninfo_all_blocks=1 00:06:18.318 --rc geninfo_unexecuted_blocks=1 00:06:18.318 00:06:18.318 ' 00:06:18.318 14:16:57 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.318 --rc genhtml_branch_coverage=1 00:06:18.318 --rc genhtml_function_coverage=1 00:06:18.318 --rc genhtml_legend=1 00:06:18.318 --rc geninfo_all_blocks=1 00:06:18.318 --rc geninfo_unexecuted_blocks=1 00:06:18.318 00:06:18.318 ' 00:06:18.318 14:16:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:db59ceda-2696-4653-8c92-acb430fd34b6 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=db59ceda-2696-4653-8c92-acb430fd34b6 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.318 14:16:57 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.318 14:16:57 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.318 14:16:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.318 14:16:57 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.319 14:16:57 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.319 14:16:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:18.319 14:16:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.319 14:16:57 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:18.319 14:16:57 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:18.319 14:16:57 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:18.319 14:16:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.319 14:16:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.319 14:16:57 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
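The PATH echoed above carries the /opt/go, /opt/protoc, and /opt/golangci directories four times each, because paths/export.sh prepends them unconditionally every time it is sourced. A hedged sketch of an idempotent prepend (`path_prepend` is a hypothetical helper, not part of SPDK):

```shell
#!/usr/bin/env bash
# Prepend a directory to PATH only if it is not already present,
# so re-sourcing an export script cannot duplicate entries.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;               # already on PATH: do nothing
        *) PATH="$1:$PATH" ;;
    esac
}

PATH=/usr/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin    # second call is a no-op
echo "$PATH"                       # /opt/go/1.21.1/bin:/usr/bin
```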
00:06:18.319 14:16:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:18.319 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:18.319 14:16:57 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:18.319 14:16:57 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:18.319 14:16:57 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:18.319 14:16:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:18.319 14:16:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:18.319 14:16:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:18.319 14:16:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:18.319 14:16:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:18.319 14:16:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:18.319 14:16:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:18.319 14:16:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:18.319 14:16:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:18.319 14:16:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:18.319 INFO: launching applications... 00:06:18.319 14:16:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
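The json_config/common.sh trace above keys every per-app attribute by app name in parallel associative arrays (app_pid, app_socket, app_params, configs_path). A minimal reconstruction of that bookkeeping, with the values taken from the trace and the pid value invented for illustration:

```shell
#!/usr/bin/env bash
# Parallel associative arrays, one entry per managed app ("target" here).
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')

app=target
app_pid[$app]=12345            # recorded once the app is actually launched
echo "${app_socket[$app]}"     # /var/tmp/spdk_tgt.sock
echo "${app_params[$app]}"     # -m 0x1 -s 1024
```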
00:06:18.319 14:16:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:18.319 14:16:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:18.319 14:16:57 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:18.319 14:16:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:18.319 14:16:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:18.319 14:16:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:18.319 14:16:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.319 14:16:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.319 14:16:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57635 00:06:18.319 Waiting for target to run... 00:06:18.319 14:16:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:18.319 14:16:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57635 /var/tmp/spdk_tgt.sock 00:06:18.319 14:16:57 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57635 ']' 00:06:18.319 14:16:57 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:18.319 14:16:57 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:18.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
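`waitforlisten 57635 /var/tmp/spdk_tgt.sock` above blocks until the freshly launched target is both alive and listening on its UNIX socket. A simplified sketch, under the assumption that "listening" can be approximated by the socket file existing (the real helper also retries an RPC over that socket; the `attempts` parameter is an addition for illustration):

```shell
#!/usr/bin/env bash
# Poll until $pid is alive and $sock exists as a socket, or give up.
waitforlisten() {
    local pid=$1 sock=$2 attempts=${3:-100} i
    for ((i = 0; i < attempts; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died: fail fast
        [ -S "$sock" ] && return 0               # socket is up: success
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Typical use after launching in the background: `waitforlisten "$!" /var/tmp/spdk_tgt.sock || exit 1`.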
00:06:18.319 14:16:57 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.319 14:16:57 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:18.319 14:16:57 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.319 14:16:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:18.319 [2024-11-20 14:16:57.190263] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:06:18.319 [2024-11-20 14:16:57.190448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57635 ] 00:06:18.887 [2024-11-20 14:16:57.652170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.887 [2024-11-20 14:16:57.776004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.823 14:16:58 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.823 00:06:19.823 14:16:58 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:19.823 14:16:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:19.823 INFO: shutting down applications... 00:06:19.823 14:16:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
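The shutdown that begins here (json_config_test_shutdown_app) sends SIGINT to the recorded pid, then re-checks liveness with `kill -0` every 0.5 s for up to 30 iterations, exactly as the repeated `(( i++ ))` / `sleep 0.5` entries show. A condensed sketch of that loop (`shutdown_app` is a simplified stand-in name):

```shell
#!/usr/bin/env bash
# Graceful shutdown as traced: SIGINT once, then poll liveness with kill -0.
shutdown_app() {
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || return 0   # process exited cleanly
        sleep 0.5
    done
    return 1                                     # still alive after ~15 s
}
```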
00:06:19.823 14:16:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:19.823 14:16:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:19.823 14:16:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:19.823 14:16:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57635 ]] 00:06:19.823 14:16:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57635 00:06:19.823 14:16:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:19.823 14:16:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.823 14:16:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57635 00:06:19.823 14:16:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:20.082 14:16:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:20.082 14:16:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.082 14:16:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57635 00:06:20.082 14:16:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:20.650 14:16:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:20.650 14:16:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.650 14:16:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57635 00:06:20.650 14:16:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:21.218 14:16:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:21.218 14:16:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.218 14:16:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57635 00:06:21.218 14:16:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:21.784 14:17:00 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:06:21.784 14:17:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.784 14:17:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57635 00:06:21.785 14:17:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:22.042 14:17:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:22.042 14:17:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.042 14:17:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57635 00:06:22.042 14:17:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:22.042 14:17:01 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:22.042 14:17:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:22.042 SPDK target shutdown done 00:06:22.042 14:17:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:22.042 Success 00:06:22.042 14:17:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:22.042 00:06:22.042 real 0m4.095s 00:06:22.042 user 0m3.877s 00:06:22.042 sys 0m0.621s 00:06:22.042 14:17:01 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.042 ************************************ 00:06:22.043 END TEST json_config_extra_key 00:06:22.043 ************************************ 00:06:22.043 14:17:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:22.300 14:17:01 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:22.300 14:17:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.300 14:17:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.300 14:17:01 -- common/autotest_common.sh@10 -- # set +x 00:06:22.300 ************************************ 00:06:22.300 START TEST alias_rpc 00:06:22.300 
************************************ 00:06:22.300 14:17:01 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:22.300 * Looking for test storage... 00:06:22.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:22.300 14:17:01 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.300 14:17:01 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.300 14:17:01 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.300 14:17:01 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.300 14:17:01 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:22.300 14:17:01 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.300 14:17:01 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.300 --rc genhtml_branch_coverage=1 00:06:22.300 --rc genhtml_function_coverage=1 00:06:22.300 --rc genhtml_legend=1 00:06:22.300 --rc geninfo_all_blocks=1 00:06:22.300 --rc geninfo_unexecuted_blocks=1 00:06:22.300 00:06:22.300 ' 00:06:22.300 14:17:01 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.300 --rc genhtml_branch_coverage=1 00:06:22.300 --rc genhtml_function_coverage=1 00:06:22.300 --rc genhtml_legend=1 00:06:22.300 --rc geninfo_all_blocks=1 00:06:22.300 --rc geninfo_unexecuted_blocks=1 00:06:22.300 00:06:22.300 ' 00:06:22.300 14:17:01 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:06:22.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.300 --rc genhtml_branch_coverage=1 00:06:22.300 --rc genhtml_function_coverage=1 00:06:22.300 --rc genhtml_legend=1 00:06:22.300 --rc geninfo_all_blocks=1 00:06:22.301 --rc geninfo_unexecuted_blocks=1 00:06:22.301 00:06:22.301 ' 00:06:22.301 14:17:01 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.301 --rc genhtml_branch_coverage=1 00:06:22.301 --rc genhtml_function_coverage=1 00:06:22.301 --rc genhtml_legend=1 00:06:22.301 --rc geninfo_all_blocks=1 00:06:22.301 --rc geninfo_unexecuted_blocks=1 00:06:22.301 00:06:22.301 ' 00:06:22.301 14:17:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:22.301 14:17:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57740 00:06:22.301 14:17:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:22.301 14:17:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57740 00:06:22.301 14:17:01 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57740 ']' 00:06:22.301 14:17:01 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.301 14:17:01 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.301 14:17:01 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.301 14:17:01 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.301 14:17:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.557 [2024-11-20 14:17:01.361616] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
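The alias_rpc teardown traced in this run (`killprocess 57740`) checks the pid's command name with `ps` before signalling, and refuses if the process turns out to be `sudo`. A hedged reconstruction of that guard, using the portable `ps -o comm= -p` form where the trace uses GNU `ps --no-headers`:

```shell
#!/usr/bin/env bash
# Verify the pid is live and is not sudo before killing, as the trace does.
killprocess() {
    local pid=$1 name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1    # nothing to kill
    name=$(ps -o comm= -p "$pid")
    [ "$name" = sudo ] && return 1            # never signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap if it is our child
}
```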
00:06:22.558 [2024-11-20 14:17:01.361825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57740 ] 00:06:22.815 [2024-11-20 14:17:01.546152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.815 [2024-11-20 14:17:01.676408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.751 14:17:02 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.751 14:17:02 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.751 14:17:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:24.009 14:17:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57740 00:06:24.009 14:17:02 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57740 ']' 00:06:24.009 14:17:02 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57740 00:06:24.009 14:17:02 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:24.009 14:17:02 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.009 14:17:02 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57740 00:06:24.009 14:17:02 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.009 killing process with pid 57740 00:06:24.009 14:17:02 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.009 14:17:02 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57740' 00:06:24.009 14:17:02 alias_rpc -- common/autotest_common.sh@973 -- # kill 57740 00:06:24.009 14:17:02 alias_rpc -- common/autotest_common.sh@978 -- # wait 57740 00:06:26.541 ************************************ 00:06:26.541 END TEST alias_rpc 00:06:26.541 ************************************ 00:06:26.541 00:06:26.541 real 
0m4.081s 00:06:26.541 user 0m4.275s 00:06:26.541 sys 0m0.598s 00:06:26.541 14:17:05 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.541 14:17:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.541 14:17:05 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:26.541 14:17:05 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:26.541 14:17:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.541 14:17:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.541 14:17:05 -- common/autotest_common.sh@10 -- # set +x 00:06:26.541 ************************************ 00:06:26.541 START TEST spdkcli_tcp 00:06:26.541 ************************************ 00:06:26.541 14:17:05 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:26.541 * Looking for test storage... 00:06:26.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:26.541 14:17:05 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.541 14:17:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.541 14:17:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.541 14:17:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.541 
14:17:05 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.541 14:17:05 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:26.541 14:17:05 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.541 14:17:05 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.541 --rc genhtml_branch_coverage=1 00:06:26.541 --rc genhtml_function_coverage=1 00:06:26.541 --rc genhtml_legend=1 
00:06:26.541 --rc geninfo_all_blocks=1 00:06:26.541 --rc geninfo_unexecuted_blocks=1 00:06:26.541 00:06:26.541 ' 00:06:26.541 14:17:05 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.541 --rc genhtml_branch_coverage=1 00:06:26.541 --rc genhtml_function_coverage=1 00:06:26.541 --rc genhtml_legend=1 00:06:26.541 --rc geninfo_all_blocks=1 00:06:26.541 --rc geninfo_unexecuted_blocks=1 00:06:26.541 00:06:26.541 ' 00:06:26.541 14:17:05 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:26.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.541 --rc genhtml_branch_coverage=1 00:06:26.541 --rc genhtml_function_coverage=1 00:06:26.541 --rc genhtml_legend=1 00:06:26.541 --rc geninfo_all_blocks=1 00:06:26.541 --rc geninfo_unexecuted_blocks=1 00:06:26.541 00:06:26.541 ' 00:06:26.541 14:17:05 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.541 --rc genhtml_branch_coverage=1 00:06:26.541 --rc genhtml_function_coverage=1 00:06:26.541 --rc genhtml_legend=1 00:06:26.541 --rc geninfo_all_blocks=1 00:06:26.541 --rc geninfo_unexecuted_blocks=1 00:06:26.541 00:06:26.541 ' 00:06:26.541 14:17:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:26.541 14:17:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:26.541 14:17:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:26.541 14:17:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:26.541 14:17:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:26.541 14:17:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:26.541 14:17:05 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:26.541 14:17:05 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.541 14:17:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.541 14:17:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57847 00:06:26.541 14:17:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57847 00:06:26.541 14:17:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:26.541 14:17:05 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57847 ']' 00:06:26.542 14:17:05 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.542 14:17:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.542 14:17:05 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.542 14:17:05 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.542 14:17:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.542 [2024-11-20 14:17:05.483730] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:06:26.542 [2024-11-20 14:17:05.483883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57847 ] 00:06:26.800 [2024-11-20 14:17:05.656577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.059 [2024-11-20 14:17:05.793784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.060 [2024-11-20 14:17:05.793809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.748 14:17:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.748 14:17:06 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:27.748 14:17:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57870 00:06:27.748 14:17:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:27.748 14:17:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:28.007 [ 00:06:28.007 "bdev_malloc_delete", 00:06:28.007 "bdev_malloc_create", 00:06:28.007 "bdev_null_resize", 00:06:28.007 "bdev_null_delete", 00:06:28.007 "bdev_null_create", 00:06:28.007 "bdev_nvme_cuse_unregister", 00:06:28.007 "bdev_nvme_cuse_register", 00:06:28.007 "bdev_opal_new_user", 00:06:28.007 "bdev_opal_set_lock_state", 00:06:28.007 "bdev_opal_delete", 00:06:28.007 "bdev_opal_get_info", 00:06:28.007 "bdev_opal_create", 00:06:28.007 "bdev_nvme_opal_revert", 00:06:28.007 "bdev_nvme_opal_init", 00:06:28.007 "bdev_nvme_send_cmd", 00:06:28.007 "bdev_nvme_set_keys", 00:06:28.007 "bdev_nvme_get_path_iostat", 00:06:28.007 "bdev_nvme_get_mdns_discovery_info", 00:06:28.007 "bdev_nvme_stop_mdns_discovery", 00:06:28.007 "bdev_nvme_start_mdns_discovery", 00:06:28.007 "bdev_nvme_set_multipath_policy", 00:06:28.007 
"bdev_nvme_set_preferred_path", 00:06:28.007 "bdev_nvme_get_io_paths", 00:06:28.007 "bdev_nvme_remove_error_injection", 00:06:28.007 "bdev_nvme_add_error_injection", 00:06:28.007 "bdev_nvme_get_discovery_info", 00:06:28.007 "bdev_nvme_stop_discovery", 00:06:28.007 "bdev_nvme_start_discovery", 00:06:28.007 "bdev_nvme_get_controller_health_info", 00:06:28.007 "bdev_nvme_disable_controller", 00:06:28.007 "bdev_nvme_enable_controller", 00:06:28.007 "bdev_nvme_reset_controller", 00:06:28.007 "bdev_nvme_get_transport_statistics", 00:06:28.007 "bdev_nvme_apply_firmware", 00:06:28.007 "bdev_nvme_detach_controller", 00:06:28.007 "bdev_nvme_get_controllers", 00:06:28.007 "bdev_nvme_attach_controller", 00:06:28.007 "bdev_nvme_set_hotplug", 00:06:28.007 "bdev_nvme_set_options", 00:06:28.007 "bdev_passthru_delete", 00:06:28.007 "bdev_passthru_create", 00:06:28.007 "bdev_lvol_set_parent_bdev", 00:06:28.007 "bdev_lvol_set_parent", 00:06:28.007 "bdev_lvol_check_shallow_copy", 00:06:28.007 "bdev_lvol_start_shallow_copy", 00:06:28.007 "bdev_lvol_grow_lvstore", 00:06:28.007 "bdev_lvol_get_lvols", 00:06:28.007 "bdev_lvol_get_lvstores", 00:06:28.007 "bdev_lvol_delete", 00:06:28.007 "bdev_lvol_set_read_only", 00:06:28.007 "bdev_lvol_resize", 00:06:28.007 "bdev_lvol_decouple_parent", 00:06:28.007 "bdev_lvol_inflate", 00:06:28.007 "bdev_lvol_rename", 00:06:28.007 "bdev_lvol_clone_bdev", 00:06:28.007 "bdev_lvol_clone", 00:06:28.007 "bdev_lvol_snapshot", 00:06:28.007 "bdev_lvol_create", 00:06:28.007 "bdev_lvol_delete_lvstore", 00:06:28.007 "bdev_lvol_rename_lvstore", 00:06:28.007 "bdev_lvol_create_lvstore", 00:06:28.007 "bdev_raid_set_options", 00:06:28.007 "bdev_raid_remove_base_bdev", 00:06:28.007 "bdev_raid_add_base_bdev", 00:06:28.007 "bdev_raid_delete", 00:06:28.007 "bdev_raid_create", 00:06:28.007 "bdev_raid_get_bdevs", 00:06:28.007 "bdev_error_inject_error", 00:06:28.007 "bdev_error_delete", 00:06:28.007 "bdev_error_create", 00:06:28.007 "bdev_split_delete", 00:06:28.007 
"bdev_split_create", 00:06:28.007 "bdev_delay_delete", 00:06:28.007 "bdev_delay_create", 00:06:28.007 "bdev_delay_update_latency", 00:06:28.007 "bdev_zone_block_delete", 00:06:28.007 "bdev_zone_block_create", 00:06:28.007 "blobfs_create", 00:06:28.007 "blobfs_detect", 00:06:28.007 "blobfs_set_cache_size", 00:06:28.007 "bdev_aio_delete", 00:06:28.007 "bdev_aio_rescan", 00:06:28.007 "bdev_aio_create", 00:06:28.007 "bdev_ftl_set_property", 00:06:28.007 "bdev_ftl_get_properties", 00:06:28.007 "bdev_ftl_get_stats", 00:06:28.007 "bdev_ftl_unmap", 00:06:28.007 "bdev_ftl_unload", 00:06:28.007 "bdev_ftl_delete", 00:06:28.007 "bdev_ftl_load", 00:06:28.007 "bdev_ftl_create", 00:06:28.007 "bdev_virtio_attach_controller", 00:06:28.007 "bdev_virtio_scsi_get_devices", 00:06:28.007 "bdev_virtio_detach_controller", 00:06:28.007 "bdev_virtio_blk_set_hotplug", 00:06:28.007 "bdev_iscsi_delete", 00:06:28.007 "bdev_iscsi_create", 00:06:28.007 "bdev_iscsi_set_options", 00:06:28.007 "accel_error_inject_error", 00:06:28.007 "ioat_scan_accel_module", 00:06:28.007 "dsa_scan_accel_module", 00:06:28.007 "iaa_scan_accel_module", 00:06:28.007 "keyring_file_remove_key", 00:06:28.007 "keyring_file_add_key", 00:06:28.007 "keyring_linux_set_options", 00:06:28.007 "fsdev_aio_delete", 00:06:28.007 "fsdev_aio_create", 00:06:28.007 "iscsi_get_histogram", 00:06:28.007 "iscsi_enable_histogram", 00:06:28.007 "iscsi_set_options", 00:06:28.007 "iscsi_get_auth_groups", 00:06:28.007 "iscsi_auth_group_remove_secret", 00:06:28.007 "iscsi_auth_group_add_secret", 00:06:28.007 "iscsi_delete_auth_group", 00:06:28.007 "iscsi_create_auth_group", 00:06:28.007 "iscsi_set_discovery_auth", 00:06:28.007 "iscsi_get_options", 00:06:28.007 "iscsi_target_node_request_logout", 00:06:28.007 "iscsi_target_node_set_redirect", 00:06:28.007 "iscsi_target_node_set_auth", 00:06:28.007 "iscsi_target_node_add_lun", 00:06:28.007 "iscsi_get_stats", 00:06:28.007 "iscsi_get_connections", 00:06:28.007 "iscsi_portal_group_set_auth", 
00:06:28.007 "iscsi_start_portal_group", 00:06:28.007 "iscsi_delete_portal_group", 00:06:28.007 "iscsi_create_portal_group", 00:06:28.007 "iscsi_get_portal_groups", 00:06:28.007 "iscsi_delete_target_node", 00:06:28.007 "iscsi_target_node_remove_pg_ig_maps", 00:06:28.007 "iscsi_target_node_add_pg_ig_maps", 00:06:28.007 "iscsi_create_target_node", 00:06:28.007 "iscsi_get_target_nodes", 00:06:28.007 "iscsi_delete_initiator_group", 00:06:28.007 "iscsi_initiator_group_remove_initiators", 00:06:28.007 "iscsi_initiator_group_add_initiators", 00:06:28.007 "iscsi_create_initiator_group", 00:06:28.007 "iscsi_get_initiator_groups", 00:06:28.007 "nvmf_set_crdt", 00:06:28.007 "nvmf_set_config", 00:06:28.007 "nvmf_set_max_subsystems", 00:06:28.007 "nvmf_stop_mdns_prr", 00:06:28.007 "nvmf_publish_mdns_prr", 00:06:28.007 "nvmf_subsystem_get_listeners", 00:06:28.007 "nvmf_subsystem_get_qpairs", 00:06:28.007 "nvmf_subsystem_get_controllers", 00:06:28.007 "nvmf_get_stats", 00:06:28.007 "nvmf_get_transports", 00:06:28.007 "nvmf_create_transport", 00:06:28.007 "nvmf_get_targets", 00:06:28.007 "nvmf_delete_target", 00:06:28.007 "nvmf_create_target", 00:06:28.007 "nvmf_subsystem_allow_any_host", 00:06:28.007 "nvmf_subsystem_set_keys", 00:06:28.007 "nvmf_subsystem_remove_host", 00:06:28.007 "nvmf_subsystem_add_host", 00:06:28.007 "nvmf_ns_remove_host", 00:06:28.007 "nvmf_ns_add_host", 00:06:28.007 "nvmf_subsystem_remove_ns", 00:06:28.007 "nvmf_subsystem_set_ns_ana_group", 00:06:28.007 "nvmf_subsystem_add_ns", 00:06:28.007 "nvmf_subsystem_listener_set_ana_state", 00:06:28.007 "nvmf_discovery_get_referrals", 00:06:28.007 "nvmf_discovery_remove_referral", 00:06:28.007 "nvmf_discovery_add_referral", 00:06:28.007 "nvmf_subsystem_remove_listener", 00:06:28.007 "nvmf_subsystem_add_listener", 00:06:28.007 "nvmf_delete_subsystem", 00:06:28.007 "nvmf_create_subsystem", 00:06:28.007 "nvmf_get_subsystems", 00:06:28.007 "env_dpdk_get_mem_stats", 00:06:28.007 "nbd_get_disks", 00:06:28.007 
"nbd_stop_disk", 00:06:28.007 "nbd_start_disk", 00:06:28.007 "ublk_recover_disk", 00:06:28.007 "ublk_get_disks", 00:06:28.007 "ublk_stop_disk", 00:06:28.007 "ublk_start_disk", 00:06:28.007 "ublk_destroy_target", 00:06:28.007 "ublk_create_target", 00:06:28.007 "virtio_blk_create_transport", 00:06:28.007 "virtio_blk_get_transports", 00:06:28.007 "vhost_controller_set_coalescing", 00:06:28.007 "vhost_get_controllers", 00:06:28.007 "vhost_delete_controller", 00:06:28.007 "vhost_create_blk_controller", 00:06:28.007 "vhost_scsi_controller_remove_target", 00:06:28.007 "vhost_scsi_controller_add_target", 00:06:28.007 "vhost_start_scsi_controller", 00:06:28.007 "vhost_create_scsi_controller", 00:06:28.007 "thread_set_cpumask", 00:06:28.007 "scheduler_set_options", 00:06:28.007 "framework_get_governor", 00:06:28.007 "framework_get_scheduler", 00:06:28.007 "framework_set_scheduler", 00:06:28.007 "framework_get_reactors", 00:06:28.007 "thread_get_io_channels", 00:06:28.007 "thread_get_pollers", 00:06:28.007 "thread_get_stats", 00:06:28.007 "framework_monitor_context_switch", 00:06:28.007 "spdk_kill_instance", 00:06:28.007 "log_enable_timestamps", 00:06:28.007 "log_get_flags", 00:06:28.007 "log_clear_flag", 00:06:28.007 "log_set_flag", 00:06:28.007 "log_get_level", 00:06:28.007 "log_set_level", 00:06:28.007 "log_get_print_level", 00:06:28.007 "log_set_print_level", 00:06:28.007 "framework_enable_cpumask_locks", 00:06:28.007 "framework_disable_cpumask_locks", 00:06:28.007 "framework_wait_init", 00:06:28.007 "framework_start_init", 00:06:28.007 "scsi_get_devices", 00:06:28.007 "bdev_get_histogram", 00:06:28.007 "bdev_enable_histogram", 00:06:28.007 "bdev_set_qos_limit", 00:06:28.007 "bdev_set_qd_sampling_period", 00:06:28.007 "bdev_get_bdevs", 00:06:28.007 "bdev_reset_iostat", 00:06:28.007 "bdev_get_iostat", 00:06:28.007 "bdev_examine", 00:06:28.007 "bdev_wait_for_examine", 00:06:28.007 "bdev_set_options", 00:06:28.007 "accel_get_stats", 00:06:28.007 "accel_set_options", 
00:06:28.007 "accel_set_driver", 00:06:28.007 "accel_crypto_key_destroy", 00:06:28.007 "accel_crypto_keys_get", 00:06:28.007 "accel_crypto_key_create", 00:06:28.007 "accel_assign_opc", 00:06:28.007 "accel_get_module_info", 00:06:28.007 "accel_get_opc_assignments", 00:06:28.007 "vmd_rescan", 00:06:28.007 "vmd_remove_device", 00:06:28.007 "vmd_enable", 00:06:28.007 "sock_get_default_impl", 00:06:28.007 "sock_set_default_impl", 00:06:28.007 "sock_impl_set_options", 00:06:28.007 "sock_impl_get_options", 00:06:28.007 "iobuf_get_stats", 00:06:28.007 "iobuf_set_options", 00:06:28.007 "keyring_get_keys", 00:06:28.007 "framework_get_pci_devices", 00:06:28.007 "framework_get_config", 00:06:28.007 "framework_get_subsystems", 00:06:28.007 "fsdev_set_opts", 00:06:28.007 "fsdev_get_opts", 00:06:28.007 "trace_get_info", 00:06:28.007 "trace_get_tpoint_group_mask", 00:06:28.007 "trace_disable_tpoint_group", 00:06:28.007 "trace_enable_tpoint_group", 00:06:28.007 "trace_clear_tpoint_mask", 00:06:28.007 "trace_set_tpoint_mask", 00:06:28.007 "notify_get_notifications", 00:06:28.007 "notify_get_types", 00:06:28.007 "spdk_get_version", 00:06:28.007 "rpc_get_methods" 00:06:28.007 ] 00:06:28.266 14:17:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:28.266 14:17:06 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:28.266 14:17:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.266 14:17:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:28.266 14:17:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57847 00:06:28.266 14:17:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57847 ']' 00:06:28.266 14:17:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57847 00:06:28.266 14:17:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:28.266 14:17:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.266 14:17:07 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57847 00:06:28.266 14:17:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.266 14:17:07 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.266 14:17:07 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57847' 00:06:28.266 killing process with pid 57847 00:06:28.266 14:17:07 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57847 00:06:28.266 14:17:07 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57847 00:06:30.796 00:06:30.796 real 0m4.104s 00:06:30.796 user 0m7.508s 00:06:30.796 sys 0m0.616s 00:06:30.796 14:17:09 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.796 14:17:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.796 ************************************ 00:06:30.796 END TEST spdkcli_tcp 00:06:30.796 ************************************ 00:06:30.796 14:17:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:30.796 14:17:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.796 14:17:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.796 14:17:09 -- common/autotest_common.sh@10 -- # set +x 00:06:30.796 ************************************ 00:06:30.796 START TEST dpdk_mem_utility 00:06:30.796 ************************************ 00:06:30.796 14:17:09 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:30.796 * Looking for test storage... 
00:06:30.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:30.796 14:17:09 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:30.796 14:17:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.796 14:17:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:30.796 14:17:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.796 14:17:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.796 14:17:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.796 14:17:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.796 14:17:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.796 14:17:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.796 14:17:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.796 14:17:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.796 14:17:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.796 14:17:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.796 14:17:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.796 14:17:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.796 14:17:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:30.796 14:17:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:30.796 14:17:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.797 14:17:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.797 14:17:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:30.797 14:17:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:30.797 14:17:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.797 14:17:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:30.797 14:17:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.797 14:17:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:30.797 14:17:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:30.797 14:17:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.797 14:17:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:30.797 14:17:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.797 14:17:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.797 14:17:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.797 14:17:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:30.797 14:17:09 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.797 14:17:09 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.797 --rc genhtml_branch_coverage=1 00:06:30.797 --rc genhtml_function_coverage=1 00:06:30.797 --rc genhtml_legend=1 00:06:30.797 --rc geninfo_all_blocks=1 00:06:30.797 --rc geninfo_unexecuted_blocks=1 00:06:30.797 00:06:30.797 ' 00:06:30.797 14:17:09 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:30.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.797 --rc genhtml_branch_coverage=1 00:06:30.797 --rc genhtml_function_coverage=1 00:06:30.797 --rc genhtml_legend=1 00:06:30.797 --rc geninfo_all_blocks=1 00:06:30.797 --rc 
geninfo_unexecuted_blocks=1 00:06:30.797 00:06:30.797 ' 00:06:30.797 14:17:09 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.797 --rc genhtml_branch_coverage=1 00:06:30.797 --rc genhtml_function_coverage=1 00:06:30.797 --rc genhtml_legend=1 00:06:30.797 --rc geninfo_all_blocks=1 00:06:30.797 --rc geninfo_unexecuted_blocks=1 00:06:30.797 00:06:30.797 ' 00:06:30.797 14:17:09 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.797 --rc genhtml_branch_coverage=1 00:06:30.797 --rc genhtml_function_coverage=1 00:06:30.797 --rc genhtml_legend=1 00:06:30.797 --rc geninfo_all_blocks=1 00:06:30.797 --rc geninfo_unexecuted_blocks=1 00:06:30.797 00:06:30.797 ' 00:06:30.797 14:17:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:30.797 14:17:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57969 00:06:30.797 14:17:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:30.797 14:17:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57969 00:06:30.797 14:17:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57969 ']' 00:06:30.797 14:17:09 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.797 14:17:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.797 14:17:09 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:30.797 14:17:09 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.797 14:17:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:30.797 [2024-11-20 14:17:09.644130] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:06:30.797 [2024-11-20 14:17:09.644321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57969 ] 00:06:31.099 [2024-11-20 14:17:09.830143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.099 [2024-11-20 14:17:09.985809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.057 14:17:10 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.057 14:17:10 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:32.057 14:17:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:32.057 14:17:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:32.057 14:17:10 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.057 14:17:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:32.057 { 00:06:32.057 "filename": "/tmp/spdk_mem_dump.txt" 00:06:32.057 } 00:06:32.057 14:17:10 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.057 14:17:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:32.057 DPDK memory size 824.000000 MiB in 1 heap(s) 00:06:32.057 1 heaps totaling size 824.000000 MiB 00:06:32.057 size: 824.000000 MiB heap id: 0 00:06:32.057 end heaps---------- 00:06:32.057 9 mempools totaling size 603.782043 MiB 00:06:32.057 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:32.057 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:32.057 size: 100.555481 MiB name: bdev_io_57969 00:06:32.057 size: 50.003479 MiB name: msgpool_57969 00:06:32.057 size: 36.509338 MiB name: fsdev_io_57969 00:06:32.057 size: 21.763794 MiB name: PDU_Pool 00:06:32.057 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:32.057 size: 4.133484 MiB name: evtpool_57969 00:06:32.057 size: 0.026123 MiB name: Session_Pool 00:06:32.057 end mempools------- 00:06:32.057 6 memzones totaling size 4.142822 MiB 00:06:32.057 size: 1.000366 MiB name: RG_ring_0_57969 00:06:32.057 size: 1.000366 MiB name: RG_ring_1_57969 00:06:32.057 size: 1.000366 MiB name: RG_ring_4_57969 00:06:32.057 size: 1.000366 MiB name: RG_ring_5_57969 00:06:32.057 size: 0.125366 MiB name: RG_ring_2_57969 00:06:32.057 size: 0.015991 MiB name: RG_ring_3_57969 00:06:32.057 end memzones------- 00:06:32.057 14:17:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:32.317 heap id: 0 total size: 824.000000 MiB number of busy elements: 315 number of free elements: 18 00:06:32.317 list of free elements. 
size: 16.781372 MiB 00:06:32.317 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:32.317 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:32.317 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:32.317 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:32.317 element at address: 0x200019900040 with size: 0.999939 MiB 00:06:32.317 element at address: 0x200019a00000 with size: 0.999084 MiB 00:06:32.317 element at address: 0x200032600000 with size: 0.994324 MiB 00:06:32.317 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:32.317 element at address: 0x200019200000 with size: 0.959656 MiB 00:06:32.317 element at address: 0x200019d00040 with size: 0.936401 MiB 00:06:32.317 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:32.317 element at address: 0x20001b400000 with size: 0.562683 MiB 00:06:32.317 element at address: 0x200000c00000 with size: 0.489197 MiB 00:06:32.317 element at address: 0x200019600000 with size: 0.487976 MiB 00:06:32.317 element at address: 0x200019e00000 with size: 0.485413 MiB 00:06:32.317 element at address: 0x200012c00000 with size: 0.433472 MiB 00:06:32.317 element at address: 0x200028800000 with size: 0.390442 MiB 00:06:32.317 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:32.317 list of standard malloc elements. 
size: 199.287720 MiB 00:06:32.317 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:32.317 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:32.317 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:32.317 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:32.317 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:06:32.317 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:32.317 element at address: 0x200019deff40 with size: 0.062683 MiB 00:06:32.317 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:32.317 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:32.317 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:06:32.317 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:32.317 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:32.317 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:32.317 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:32.317 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:32.317 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:32.317 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:32.317 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:32.317 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:32.317 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:32.317 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:32.317 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:06:32.318 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:32.318 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012bff180 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:32.318 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012bff980 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:06:32.318 
element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200019affc40 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4911c0 with size: 0.000244 
MiB 00:06:32.318 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:06:32.318 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b492dc0 
with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:06:32.319 element at 
address: 0x20001b4949c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:06:32.319 element at address: 0x200028863f40 with size: 0.000244 MiB 00:06:32.319 element at address: 0x200028864040 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886af80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886b080 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886b180 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886b280 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886b380 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886b480 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886b580 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886b680 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886b780 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886b880 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886b980 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886bc80 with size: 0.000244 MiB 
00:06:32.319 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886be80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886c080 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886c180 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886c280 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886c380 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886c480 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886c580 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886c680 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886c780 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886c880 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886c980 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886d080 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886d180 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886d280 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886d380 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886d480 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886d580 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886d680 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886d780 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886d880 with 
size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886d980 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886da80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886db80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886de80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886df80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886e080 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886e180 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886e280 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886e380 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886e480 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886e580 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886e680 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886e780 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886e880 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886e980 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886f080 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886f180 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886f280 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886f380 with size: 0.000244 MiB 00:06:32.319 element at address: 
0x20002886f480 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886f580 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886f680 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886f780 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886f880 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886f980 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:06:32.319 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:06:32.319 list of memzone associated elements. size: 607.930908 MiB 00:06:32.319 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:06:32.319 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:32.319 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:06:32.319 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:32.320 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:06:32.320 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57969_0 00:06:32.320 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:32.320 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57969_0 00:06:32.320 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:32.320 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57969_0 00:06:32.320 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:06:32.320 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:32.320 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:06:32.320 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:32.320 element at address: 0x2000004ffec0 with size: 
3.000305 MiB 00:06:32.320 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57969_0 00:06:32.320 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:32.320 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57969 00:06:32.320 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:32.320 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57969 00:06:32.320 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:06:32.320 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:32.320 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:06:32.320 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:32.320 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:32.320 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:32.320 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:06:32.320 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:32.320 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:32.320 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57969 00:06:32.320 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:32.320 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57969 00:06:32.320 element at address: 0x200019affd40 with size: 1.000549 MiB 00:06:32.320 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57969 00:06:32.320 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:06:32.320 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57969 00:06:32.320 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:32.320 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57969 00:06:32.320 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:32.320 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57969 00:06:32.320 element at address: 0x20001967dac0 with size: 
0.500549 MiB 00:06:32.320 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:32.320 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:06:32.320 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:32.320 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:06:32.320 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:32.320 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:32.320 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57969 00:06:32.320 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:32.320 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57969 00:06:32.320 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:06:32.320 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:32.320 element at address: 0x200028864140 with size: 0.023804 MiB 00:06:32.320 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:32.320 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:32.320 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57969 00:06:32.320 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:06:32.320 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:32.320 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:32.320 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57969 00:06:32.320 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:32.320 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57969 00:06:32.320 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:32.320 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57969 00:06:32.320 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:06:32.320 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:32.320 14:17:11 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:32.320 14:17:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57969 00:06:32.320 14:17:11 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57969 ']' 00:06:32.320 14:17:11 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57969 00:06:32.320 14:17:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:32.320 14:17:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.320 14:17:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57969 00:06:32.320 14:17:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.320 14:17:11 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.320 killing process with pid 57969 00:06:32.320 14:17:11 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57969' 00:06:32.320 14:17:11 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57969 00:06:32.320 14:17:11 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57969 00:06:34.854 00:06:34.854 real 0m3.970s 00:06:34.854 user 0m4.024s 00:06:34.854 sys 0m0.602s 00:06:34.854 14:17:13 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.854 14:17:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:34.854 ************************************ 00:06:34.854 END TEST dpdk_mem_utility 00:06:34.854 ************************************ 00:06:34.854 14:17:13 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:34.854 14:17:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.854 14:17:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.854 14:17:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.854 ************************************ 
00:06:34.854 START TEST event 00:06:34.854 ************************************ 00:06:34.854 14:17:13 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:34.854 * Looking for test storage... 00:06:34.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:34.854 14:17:13 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:34.854 14:17:13 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:34.854 14:17:13 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.854 14:17:13 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.854 14:17:13 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.854 14:17:13 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.854 14:17:13 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.855 14:17:13 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.855 14:17:13 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.855 14:17:13 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.855 14:17:13 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.855 14:17:13 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.855 14:17:13 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.855 14:17:13 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.855 14:17:13 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.855 14:17:13 event -- scripts/common.sh@344 -- # case "$op" in 00:06:34.855 14:17:13 event -- scripts/common.sh@345 -- # : 1 00:06:34.855 14:17:13 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.855 14:17:13 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.855 14:17:13 event -- scripts/common.sh@365 -- # decimal 1 00:06:34.855 14:17:13 event -- scripts/common.sh@353 -- # local d=1 00:06:34.855 14:17:13 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.855 14:17:13 event -- scripts/common.sh@355 -- # echo 1 00:06:34.855 14:17:13 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.855 14:17:13 event -- scripts/common.sh@366 -- # decimal 2 00:06:34.855 14:17:13 event -- scripts/common.sh@353 -- # local d=2 00:06:34.855 14:17:13 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.855 14:17:13 event -- scripts/common.sh@355 -- # echo 2 00:06:34.855 14:17:13 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.855 14:17:13 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.855 14:17:13 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.855 14:17:13 event -- scripts/common.sh@368 -- # return 0 00:06:34.855 14:17:13 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.855 14:17:13 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:34.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.855 --rc genhtml_branch_coverage=1 00:06:34.855 --rc genhtml_function_coverage=1 00:06:34.855 --rc genhtml_legend=1 00:06:34.855 --rc geninfo_all_blocks=1 00:06:34.855 --rc geninfo_unexecuted_blocks=1 00:06:34.855 00:06:34.855 ' 00:06:34.855 14:17:13 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:34.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.855 --rc genhtml_branch_coverage=1 00:06:34.855 --rc genhtml_function_coverage=1 00:06:34.855 --rc genhtml_legend=1 00:06:34.855 --rc geninfo_all_blocks=1 00:06:34.855 --rc geninfo_unexecuted_blocks=1 00:06:34.855 00:06:34.855 ' 00:06:34.855 14:17:13 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:34.855 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:34.855 --rc genhtml_branch_coverage=1 00:06:34.855 --rc genhtml_function_coverage=1 00:06:34.855 --rc genhtml_legend=1 00:06:34.855 --rc geninfo_all_blocks=1 00:06:34.855 --rc geninfo_unexecuted_blocks=1 00:06:34.855 00:06:34.855 ' 00:06:34.855 14:17:13 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:34.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.855 --rc genhtml_branch_coverage=1 00:06:34.855 --rc genhtml_function_coverage=1 00:06:34.855 --rc genhtml_legend=1 00:06:34.855 --rc geninfo_all_blocks=1 00:06:34.855 --rc geninfo_unexecuted_blocks=1 00:06:34.855 00:06:34.855 ' 00:06:34.855 14:17:13 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:34.855 14:17:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:34.855 14:17:13 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:34.855 14:17:13 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:34.855 14:17:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.855 14:17:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.855 ************************************ 00:06:34.855 START TEST event_perf 00:06:34.855 ************************************ 00:06:34.855 14:17:13 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:34.855 Running I/O for 1 seconds...[2024-11-20 14:17:13.576488] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
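The xtrace above (scripts/common.sh@333-368) steps through a component-wise version comparison: `lt 1.15 2` splits both version strings on `.`, `-`, and `:` and compares numeric fields left to right. A minimal standalone sketch of that traced logic (an illustration of the steps in the trace, not the exact SPDK source):

```shell
#!/usr/bin/env bash
# Component-wise version comparison, as traced from scripts/common.sh.
# Splits each version on '.', '-', ':' and compares numeric fields left to right.
cmp_versions() {
    local ver1 ver2 op=$2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    local v lt=0 gt=0
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        if ((a > b)); then gt=1; break
        elif ((a < b)); then lt=1; break
        fi
    done
    case "$op" in
        '<') ((lt == 1)) ;;
        '>') ((gt == 1)) ;;
        *) return 2 ;;
    esac
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "1.15 < 2"
```

This mirrors why the trace above takes the `ver1[v]=1`, `ver2[v]=2` branch and returns 0: the first differing field decides the comparison, so `1.15` sorts before `2` even though 15 > 2 in the second field.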
00:06:34.855 [2024-11-20 14:17:13.576639] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58077 ] 00:06:34.855 [2024-11-20 14:17:13.762025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.114 [2024-11-20 14:17:13.926408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.114 [2024-11-20 14:17:13.926515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.114 [2024-11-20 14:17:13.927655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.114 [2024-11-20 14:17:13.927695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.491 Running I/O for 1 seconds... 00:06:36.491 lcore 0: 192683 00:06:36.491 lcore 1: 192684 00:06:36.491 lcore 2: 192685 00:06:36.491 lcore 3: 192685 00:06:36.491 done. 
00:06:36.491 00:06:36.491 real 0m1.615s 00:06:36.491 user 0m4.371s 00:06:36.491 sys 0m0.118s 00:06:36.491 14:17:15 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.491 14:17:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.491 ************************************ 00:06:36.491 END TEST event_perf 00:06:36.491 ************************************ 00:06:36.491 14:17:15 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:36.491 14:17:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:36.491 14:17:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.491 14:17:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.491 ************************************ 00:06:36.491 START TEST event_reactor 00:06:36.491 ************************************ 00:06:36.491 14:17:15 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:36.491 [2024-11-20 14:17:15.252280] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:06:36.491 [2024-11-20 14:17:15.253349] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58122 ] 00:06:36.491 [2024-11-20 14:17:15.465083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.750 [2024-11-20 14:17:15.611134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.124 test_start 00:06:38.124 oneshot 00:06:38.124 tick 100 00:06:38.124 tick 100 00:06:38.124 tick 250 00:06:38.124 tick 100 00:06:38.124 tick 100 00:06:38.124 tick 100 00:06:38.124 tick 250 00:06:38.124 tick 500 00:06:38.124 tick 100 00:06:38.124 tick 100 00:06:38.124 tick 250 00:06:38.124 tick 100 00:06:38.124 tick 100 00:06:38.124 test_end 00:06:38.124 00:06:38.124 real 0m1.628s 00:06:38.124 user 0m1.403s 00:06:38.124 sys 0m0.115s 00:06:38.124 14:17:16 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.124 14:17:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:38.124 ************************************ 00:06:38.124 END TEST event_reactor 00:06:38.124 ************************************ 00:06:38.124 14:17:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:38.124 14:17:16 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:38.124 14:17:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.124 14:17:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.124 ************************************ 00:06:38.124 START TEST event_reactor_perf 00:06:38.124 ************************************ 00:06:38.124 14:17:16 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:38.124 [2024-11-20 
14:17:16.923322] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:06:38.124 [2024-11-20 14:17:16.923456] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58159 ] 00:06:38.124 [2024-11-20 14:17:17.096313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.382 [2024-11-20 14:17:17.228479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.813 test_start 00:06:39.813 test_end 00:06:39.813 Performance: 283722 events per second 00:06:39.813 00:06:39.813 real 0m1.579s 00:06:39.813 user 0m1.362s 00:06:39.813 sys 0m0.107s 00:06:39.813 14:17:18 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.813 14:17:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:39.813 ************************************ 00:06:39.813 END TEST event_reactor_perf 00:06:39.813 ************************************ 00:06:39.813 14:17:18 event -- event/event.sh@49 -- # uname -s 00:06:39.813 14:17:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:39.813 14:17:18 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:39.813 14:17:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.813 14:17:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.813 14:17:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.813 ************************************ 00:06:39.813 START TEST event_scheduler 00:06:39.813 ************************************ 00:06:39.813 14:17:18 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:39.814 * Looking for test storage... 
00:06:39.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:39.814 14:17:18 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:39.814 14:17:18 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:39.814 14:17:18 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:39.814 14:17:18 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.814 14:17:18 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:39.814 14:17:18 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.814 14:17:18 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:39.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.814 --rc genhtml_branch_coverage=1 00:06:39.814 --rc genhtml_function_coverage=1 00:06:39.814 --rc genhtml_legend=1 00:06:39.814 --rc geninfo_all_blocks=1 00:06:39.814 --rc geninfo_unexecuted_blocks=1 00:06:39.814 00:06:39.814 ' 00:06:39.814 14:17:18 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:39.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.814 --rc genhtml_branch_coverage=1 00:06:39.814 --rc genhtml_function_coverage=1 00:06:39.814 --rc 
genhtml_legend=1 00:06:39.814 --rc geninfo_all_blocks=1 00:06:39.814 --rc geninfo_unexecuted_blocks=1 00:06:39.814 00:06:39.814 ' 00:06:39.814 14:17:18 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:39.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.814 --rc genhtml_branch_coverage=1 00:06:39.814 --rc genhtml_function_coverage=1 00:06:39.814 --rc genhtml_legend=1 00:06:39.814 --rc geninfo_all_blocks=1 00:06:39.814 --rc geninfo_unexecuted_blocks=1 00:06:39.814 00:06:39.814 ' 00:06:39.814 14:17:18 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:39.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.814 --rc genhtml_branch_coverage=1 00:06:39.814 --rc genhtml_function_coverage=1 00:06:39.814 --rc genhtml_legend=1 00:06:39.814 --rc geninfo_all_blocks=1 00:06:39.814 --rc geninfo_unexecuted_blocks=1 00:06:39.814 00:06:39.814 ' 00:06:39.814 14:17:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:39.814 14:17:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58229 00:06:39.814 14:17:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:39.814 14:17:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.814 14:17:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58229 00:06:39.814 14:17:18 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58229 ']' 00:06:39.814 14:17:18 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.814 14:17:18 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.814 14:17:18 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:39.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.814 14:17:18 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.814 14:17:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:40.073 [2024-11-20 14:17:18.825676] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:06:40.073 [2024-11-20 14:17:18.825876] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58229 ] 00:06:40.073 [2024-11-20 14:17:19.015533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.332 [2024-11-20 14:17:19.155048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.332 [2024-11-20 14:17:19.155145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.332 [2024-11-20 14:17:19.155285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.332 [2024-11-20 14:17:19.155302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.900 14:17:19 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.900 14:17:19 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:40.900 14:17:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:40.900 14:17:19 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.900 14:17:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:40.900 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:40.900 POWER: Cannot set governor of lcore 0 to userspace 00:06:40.900 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:40.900 POWER: Cannot set governor of lcore 0 to performance 00:06:40.900 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:40.900 POWER: Cannot set governor of lcore 0 to userspace 00:06:40.900 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:40.900 POWER: Cannot set governor of lcore 0 to userspace 00:06:40.900 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:40.900 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:40.900 POWER: Unable to set Power Management Environment for lcore 0 00:06:40.900 [2024-11-20 14:17:19.869486] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:40.900 [2024-11-20 14:17:19.869516] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:40.900 [2024-11-20 14:17:19.869531] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:40.900 [2024-11-20 14:17:19.869557] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:40.900 [2024-11-20 14:17:19.869571] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:40.900 [2024-11-20 14:17:19.869585] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:40.900 14:17:19 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.900 14:17:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:40.900 14:17:19 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.900 14:17:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:41.467 [2024-11-20 14:17:20.201793] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:41.467 14:17:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.467 14:17:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:41.467 14:17:20 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.467 14:17:20 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.467 14:17:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:41.467 ************************************ 00:06:41.467 START TEST scheduler_create_thread 00:06:41.467 ************************************ 00:06:41.467 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:41.467 14:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:41.467 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.467 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.467 2 00:06:41.467 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.467 14:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:41.467 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.467 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.467 3 00:06:41.467 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.467 14:17:20 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:41.467 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.467 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.467 4 00:06:41.467 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.467 14:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:41.467 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.468 5 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.468 6 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:41.468 7 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.468 8 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.468 9 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.468 10 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:41.468 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.468 14:17:20 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.035 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.035 00:06:42.035 real 0m0.591s 00:06:42.035 user 0m0.014s 00:06:42.035 sys 0m0.004s 00:06:42.035 ************************************ 00:06:42.035 END TEST scheduler_create_thread 00:06:42.035 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.035 14:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.035 ************************************ 00:06:42.035 14:17:20 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:42.035 14:17:20 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58229 00:06:42.035 14:17:20 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58229 ']' 00:06:42.035 14:17:20 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58229 00:06:42.035 14:17:20 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:42.035 14:17:20 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.035 14:17:20 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58229 00:06:42.035 14:17:20 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:42.035 killing process with pid 58229 00:06:42.035 14:17:20 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:42.035 14:17:20 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58229' 00:06:42.035 14:17:20 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58229 00:06:42.035 14:17:20 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58229 00:06:42.601 [2024-11-20 14:17:21.283857] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:43.538 00:06:43.538 real 0m3.852s 00:06:43.538 user 0m7.671s 00:06:43.538 sys 0m0.529s 00:06:43.538 14:17:22 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.538 ************************************ 00:06:43.538 END TEST event_scheduler 00:06:43.538 14:17:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:43.538 ************************************ 00:06:43.538 14:17:22 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:43.538 14:17:22 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:43.538 14:17:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.538 14:17:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.538 14:17:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.538 ************************************ 00:06:43.538 START TEST app_repeat 00:06:43.538 ************************************ 00:06:43.538 14:17:22 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:43.538 14:17:22 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.538 14:17:22 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.538 14:17:22 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:43.538 14:17:22 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.538 14:17:22 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:43.538 14:17:22 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:43.538 14:17:22 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:43.538 14:17:22 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58324 00:06:43.538 14:17:22 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:43.538 Process app_repeat pid: 58324 00:06:43.538 
14:17:22 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58324' 00:06:43.538 14:17:22 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:43.538 14:17:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:43.538 spdk_app_start Round 0 00:06:43.538 14:17:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:43.538 14:17:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58324 /var/tmp/spdk-nbd.sock 00:06:43.538 14:17:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58324 ']' 00:06:43.538 14:17:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.538 14:17:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:43.538 14:17:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.538 14:17:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.538 14:17:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.539 [2024-11-20 14:17:22.477226] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:06:43.539 [2024-11-20 14:17:22.477965] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58324 ] 00:06:43.797 [2024-11-20 14:17:22.656671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.056 [2024-11-20 14:17:22.792491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.056 [2024-11-20 14:17:22.792495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.624 14:17:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.624 14:17:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:44.624 14:17:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.882 Malloc0 00:06:44.882 14:17:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.449 Malloc1 00:06:45.449 14:17:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.449 14:17:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.449 14:17:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.449 14:17:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.449 14:17:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.449 14:17:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.449 14:17:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.449 14:17:24 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.449 14:17:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.449 14:17:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.449 14:17:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.449 14:17:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.449 14:17:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:45.449 14:17:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.449 14:17:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.449 14:17:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:45.709 /dev/nbd0 00:06:45.709 14:17:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.709 14:17:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.709 14:17:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:45.709 14:17:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:45.709 14:17:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.709 14:17:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.709 14:17:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:45.709 14:17:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:45.709 14:17:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.709 14:17:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.709 14:17:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.709 1+0 records in 00:06:45.709 1+0 
records out 00:06:45.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253928 s, 16.1 MB/s 00:06:45.709 14:17:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.709 14:17:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:45.709 14:17:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.709 14:17:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.709 14:17:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:45.709 14:17:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.709 14:17:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.709 14:17:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.968 /dev/nbd1 00:06:45.968 14:17:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.968 14:17:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.968 14:17:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:45.968 14:17:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:45.968 14:17:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.968 14:17:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.968 14:17:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:45.968 14:17:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:45.968 14:17:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.968 14:17:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.968 14:17:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.968 1+0 records in 00:06:45.968 1+0 records out 00:06:45.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362771 s, 11.3 MB/s 00:06:45.968 14:17:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.968 14:17:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:45.968 14:17:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.968 14:17:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.968 14:17:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:45.968 14:17:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.968 14:17:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.968 14:17:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.968 14:17:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.968 14:17:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.535 14:17:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.535 { 00:06:46.535 "nbd_device": "/dev/nbd0", 00:06:46.535 "bdev_name": "Malloc0" 00:06:46.535 }, 00:06:46.535 { 00:06:46.535 "nbd_device": "/dev/nbd1", 00:06:46.535 "bdev_name": "Malloc1" 00:06:46.535 } 00:06:46.535 ]' 00:06:46.535 14:17:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.535 { 00:06:46.535 "nbd_device": "/dev/nbd0", 00:06:46.535 "bdev_name": "Malloc0" 00:06:46.535 }, 00:06:46.535 { 00:06:46.535 "nbd_device": "/dev/nbd1", 00:06:46.535 "bdev_name": "Malloc1" 00:06:46.535 } 00:06:46.535 ]' 00:06:46.535 14:17:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:46.535 14:17:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.536 /dev/nbd1' 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.536 /dev/nbd1' 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.536 256+0 records in 00:06:46.536 256+0 records out 00:06:46.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00947859 s, 111 MB/s 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.536 256+0 records in 00:06:46.536 256+0 records out 00:06:46.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289885 s, 36.2 MB/s 00:06:46.536 14:17:25 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:46.536 256+0 records in 00:06:46.536 256+0 records out 00:06:46.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0327466 s, 32.0 MB/s 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.536 14:17:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.795 14:17:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.795 14:17:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.795 14:17:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.795 14:17:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.795 14:17:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.795 14:17:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.795 14:17:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.795 14:17:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.795 14:17:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.795 14:17:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.362 14:17:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.362 14:17:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.362 14:17:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.362 14:17:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.362 14:17:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.362 14:17:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.362 14:17:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:47.362 14:17:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.362 14:17:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.362 14:17:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.362 14:17:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.621 14:17:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.621 14:17:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.621 14:17:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.621 14:17:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.621 14:17:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.621 14:17:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.622 14:17:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:47.622 14:17:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.622 14:17:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.622 14:17:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.622 14:17:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.622 14:17:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.622 14:17:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:48.188 14:17:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:49.124 [2024-11-20 14:17:28.072527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.382 [2024-11-20 14:17:28.208735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.382 [2024-11-20 14:17:28.208751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.676 
[2024-11-20 14:17:28.408190] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:49.676 [2024-11-20 14:17:28.408278] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:51.051 spdk_app_start Round 1 00:06:51.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.051 14:17:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:51.051 14:17:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:51.052 14:17:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58324 /var/tmp/spdk-nbd.sock 00:06:51.052 14:17:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58324 ']' 00:06:51.052 14:17:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.052 14:17:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.052 14:17:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:51.052 14:17:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.052 14:17:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.618 14:17:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.618 14:17:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:51.618 14:17:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.876 Malloc0 00:06:51.876 14:17:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.135 Malloc1 00:06:52.135 14:17:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.135 14:17:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.135 14:17:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.135 14:17:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:52.135 14:17:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.135 14:17:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:52.135 14:17:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.135 14:17:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.135 14:17:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.135 14:17:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:52.135 14:17:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.135 14:17:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:52.135 14:17:30 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:52.135 14:17:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:52.135 14:17:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.135 14:17:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:52.393 /dev/nbd0 00:06:52.393 14:17:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:52.393 14:17:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:52.393 14:17:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:52.393 14:17:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:52.393 14:17:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.393 14:17:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.393 14:17:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:52.393 14:17:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:52.393 14:17:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.393 14:17:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.393 14:17:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.393 1+0 records in 00:06:52.393 1+0 records out 00:06:52.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304881 s, 13.4 MB/s 00:06:52.393 14:17:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.393 14:17:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:52.393 14:17:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.393 
14:17:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.393 14:17:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:52.393 14:17:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.393 14:17:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.393 14:17:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.652 /dev/nbd1 00:06:52.652 14:17:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.652 14:17:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.652 14:17:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:52.652 14:17:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:52.652 14:17:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.652 14:17:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.652 14:17:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:52.911 14:17:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:52.911 14:17:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.911 14:17:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.911 14:17:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.911 1+0 records in 00:06:52.911 1+0 records out 00:06:52.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224015 s, 18.3 MB/s 00:06:52.911 14:17:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.911 14:17:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:52.911 14:17:31 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.911 14:17:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.911 14:17:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:52.911 14:17:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.911 14:17:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.911 14:17:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.911 14:17:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.911 14:17:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.170 14:17:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:53.171 { 00:06:53.171 "nbd_device": "/dev/nbd0", 00:06:53.171 "bdev_name": "Malloc0" 00:06:53.171 }, 00:06:53.171 { 00:06:53.171 "nbd_device": "/dev/nbd1", 00:06:53.171 "bdev_name": "Malloc1" 00:06:53.171 } 00:06:53.171 ]' 00:06:53.171 14:17:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:53.171 { 00:06:53.171 "nbd_device": "/dev/nbd0", 00:06:53.171 "bdev_name": "Malloc0" 00:06:53.171 }, 00:06:53.171 { 00:06:53.171 "nbd_device": "/dev/nbd1", 00:06:53.171 "bdev_name": "Malloc1" 00:06:53.171 } 00:06:53.171 ]' 00:06:53.171 14:17:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:53.171 /dev/nbd1' 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:53.171 /dev/nbd1' 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:53.171 
14:17:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:53.171 256+0 records in 00:06:53.171 256+0 records out 00:06:53.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00530589 s, 198 MB/s 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:53.171 256+0 records in 00:06:53.171 256+0 records out 00:06:53.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274011 s, 38.3 MB/s 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:53.171 256+0 records in 00:06:53.171 256+0 records out 00:06:53.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284155 s, 36.9 MB/s 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.171 14:17:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.430 14:17:32 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.430 14:17:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.430 14:17:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.430 14:17:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.430 14:17:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.430 14:17:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.430 14:17:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.430 14:17:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.430 14:17:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.430 14:17:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.690 14:17:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.949 14:17:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.949 14:17:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.949 14:17:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.949 14:17:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.949 14:17:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.949 14:17:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.949 14:17:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.949 14:17:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.949 14:17:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.949 14:17:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.209 14:17:32 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:54.209 14:17:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:54.209 14:17:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.209 14:17:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:54.209 14:17:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.209 14:17:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:54.209 14:17:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:54.209 14:17:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:54.209 14:17:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:54.209 14:17:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:54.209 14:17:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:54.209 14:17:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:54.209 14:17:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:54.776 14:17:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:55.710 [2024-11-20 14:17:34.591559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.968 [2024-11-20 14:17:34.721130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.968 [2024-11-20 14:17:34.721153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.968 [2024-11-20 14:17:34.917480] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.968 [2024-11-20 14:17:34.917616] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:57.873 spdk_app_start Round 2 00:06:57.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:57.873 14:17:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:57.873 14:17:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:57.873 14:17:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58324 /var/tmp/spdk-nbd.sock 00:06:57.873 14:17:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58324 ']' 00:06:57.873 14:17:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.873 14:17:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.873 14:17:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:57.873 14:17:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.873 14:17:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.873 14:17:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.873 14:17:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:57.873 14:17:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.440 Malloc0 00:06:58.440 14:17:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.699 Malloc1 00:06:58.699 14:17:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.699 14:17:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.699 14:17:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.699 14:17:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.699 14:17:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.699 14:17:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.699 14:17:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.699 14:17:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.699 14:17:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.699 14:17:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.699 14:17:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.699 14:17:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.699 14:17:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.699 14:17:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.699 14:17:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.699 14:17:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:58.958 /dev/nbd0 00:06:58.958 14:17:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.958 14:17:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.958 14:17:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:58.958 14:17:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:58.958 14:17:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.958 14:17:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.958 14:17:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:58.958 14:17:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:58.958 14:17:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:06:58.958 14:17:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.958 14:17:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.958 1+0 records in 00:06:58.958 1+0 records out 00:06:58.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426834 s, 9.6 MB/s 00:06:58.958 14:17:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.958 14:17:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:58.958 14:17:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.958 14:17:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.958 14:17:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:58.958 14:17:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.958 14:17:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.958 14:17:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:59.217 /dev/nbd1 00:06:59.217 14:17:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:59.217 14:17:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:59.217 14:17:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:59.217 14:17:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:59.217 14:17:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.217 14:17:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.217 14:17:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:59.217 14:17:38 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:59.217 14:17:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.217 14:17:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.217 14:17:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:59.217 1+0 records in 00:06:59.217 1+0 records out 00:06:59.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417374 s, 9.8 MB/s 00:06:59.217 14:17:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.217 14:17:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:59.217 14:17:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.217 14:17:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.217 14:17:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:59.217 14:17:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.217 14:17:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.217 14:17:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.217 14:17:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.217 14:17:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.786 { 00:06:59.786 "nbd_device": "/dev/nbd0", 00:06:59.786 "bdev_name": "Malloc0" 00:06:59.786 }, 00:06:59.786 { 00:06:59.786 "nbd_device": "/dev/nbd1", 00:06:59.786 "bdev_name": "Malloc1" 00:06:59.786 } 00:06:59.786 ]' 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.786 { 00:06:59.786 "nbd_device": "/dev/nbd0", 00:06:59.786 "bdev_name": "Malloc0" 00:06:59.786 }, 00:06:59.786 { 00:06:59.786 "nbd_device": "/dev/nbd1", 00:06:59.786 "bdev_name": "Malloc1" 00:06:59.786 } 00:06:59.786 ]' 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.786 /dev/nbd1' 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.786 /dev/nbd1' 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.786 256+0 records in 00:06:59.786 256+0 records out 00:06:59.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596276 s, 176 MB/s 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.786 14:17:38 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.786 256+0 records in 00:06:59.786 256+0 records out 00:06:59.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304557 s, 34.4 MB/s 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.786 256+0 records in 00:06:59.786 256+0 records out 00:06:59.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278379 s, 37.7 MB/s 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.786 14:17:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:00.045 14:17:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:00.045 14:17:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:00.045 14:17:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:00.045 14:17:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.045 14:17:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.045 14:17:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:00.045 14:17:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.045 14:17:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.045 14:17:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.045 14:17:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.304 14:17:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.304 14:17:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.304 14:17:39 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:07:00.304 14:17:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.304 14:17:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.304 14:17:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.304 14:17:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.304 14:17:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.304 14:17:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.304 14:17:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.304 14:17:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.871 14:17:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.871 14:17:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.871 14:17:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.871 14:17:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.871 14:17:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.871 14:17:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.871 14:17:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:00.871 14:17:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.871 14:17:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.871 14:17:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:00.871 14:17:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:00.871 14:17:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:00.871 14:17:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:01.130 14:17:40 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:07:02.506 [2024-11-20 14:17:41.125210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.506 [2024-11-20 14:17:41.254322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.506 [2024-11-20 14:17:41.254336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.506 [2024-11-20 14:17:41.447384] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:02.506 [2024-11-20 14:17:41.447488] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:04.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:04.451 14:17:43 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58324 /var/tmp/spdk-nbd.sock 00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58324 ']' 00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:04.451 14:17:43 event.app_repeat -- event/event.sh@39 -- # killprocess 58324 00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58324 ']' 00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58324 00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58324 00:07:04.451 killing process with pid 58324 00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58324' 00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58324 00:07:04.451 14:17:43 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58324 00:07:05.386 spdk_app_start is called in Round 0. 00:07:05.386 Shutdown signal received, stop current app iteration 00:07:05.386 Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 reinitialization... 00:07:05.386 spdk_app_start is called in Round 1. 00:07:05.386 Shutdown signal received, stop current app iteration 00:07:05.386 Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 reinitialization... 00:07:05.386 spdk_app_start is called in Round 2. 
00:07:05.386 Shutdown signal received, stop current app iteration 00:07:05.386 Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 reinitialization... 00:07:05.386 spdk_app_start is called in Round 3. 00:07:05.386 Shutdown signal received, stop current app iteration 00:07:05.645 ************************************ 00:07:05.645 END TEST app_repeat 00:07:05.645 ************************************ 00:07:05.645 14:17:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:05.645 14:17:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:05.645 00:07:05.645 real 0m21.971s 00:07:05.645 user 0m48.829s 00:07:05.645 sys 0m3.149s 00:07:05.645 14:17:44 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.645 14:17:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:05.645 14:17:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:05.645 14:17:44 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:05.645 14:17:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.645 14:17:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.645 14:17:44 event -- common/autotest_common.sh@10 -- # set +x 00:07:05.645 ************************************ 00:07:05.645 START TEST cpu_locks 00:07:05.645 ************************************ 00:07:05.645 14:17:44 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:05.645 * Looking for test storage... 
00:07:05.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:05.645 14:17:44 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.645 14:17:44 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.645 14:17:44 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.646 14:17:44 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.646 14:17:44 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:05.646 14:17:44 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.646 14:17:44 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:05.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.646 --rc genhtml_branch_coverage=1 00:07:05.646 --rc genhtml_function_coverage=1 00:07:05.646 --rc genhtml_legend=1 00:07:05.646 --rc geninfo_all_blocks=1 00:07:05.646 --rc geninfo_unexecuted_blocks=1 00:07:05.646 00:07:05.646 ' 00:07:05.646 14:17:44 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:05.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.646 --rc genhtml_branch_coverage=1 00:07:05.646 --rc genhtml_function_coverage=1 00:07:05.646 --rc genhtml_legend=1 00:07:05.646 --rc geninfo_all_blocks=1 00:07:05.646 --rc geninfo_unexecuted_blocks=1 
00:07:05.646 00:07:05.646 ' 00:07:05.646 14:17:44 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:05.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.646 --rc genhtml_branch_coverage=1 00:07:05.646 --rc genhtml_function_coverage=1 00:07:05.646 --rc genhtml_legend=1 00:07:05.646 --rc geninfo_all_blocks=1 00:07:05.646 --rc geninfo_unexecuted_blocks=1 00:07:05.646 00:07:05.646 ' 00:07:05.646 14:17:44 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:05.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.646 --rc genhtml_branch_coverage=1 00:07:05.646 --rc genhtml_function_coverage=1 00:07:05.646 --rc genhtml_legend=1 00:07:05.646 --rc geninfo_all_blocks=1 00:07:05.646 --rc geninfo_unexecuted_blocks=1 00:07:05.646 00:07:05.646 ' 00:07:05.646 14:17:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:05.646 14:17:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:05.646 14:17:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:05.646 14:17:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:05.646 14:17:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.646 14:17:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.646 14:17:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.905 ************************************ 00:07:05.905 START TEST default_locks 00:07:05.905 ************************************ 00:07:05.905 14:17:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:05.905 14:17:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58801 00:07:05.905 14:17:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58801 00:07:05.905 14:17:44 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.905 14:17:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58801 ']' 00:07:05.905 14:17:44 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.905 14:17:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.905 14:17:44 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.905 14:17:44 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.905 14:17:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.905 [2024-11-20 14:17:44.771504] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:07:05.905 [2024-11-20 14:17:44.771941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58801 ] 00:07:06.163 [2024-11-20 14:17:44.962394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.163 [2024-11-20 14:17:45.098412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.100 14:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.100 14:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:07.100 14:17:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58801 00:07:07.100 14:17:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.100 14:17:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58801 00:07:07.667 14:17:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58801 00:07:07.667 14:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58801 ']' 00:07:07.667 14:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58801 00:07:07.667 14:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:07.667 14:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.667 14:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58801 00:07:07.667 killing process with pid 58801 00:07:07.667 14:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.667 14:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.667 14:17:46 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58801' 00:07:07.667 14:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58801 00:07:07.667 14:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58801 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58801 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58801 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58801 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58801 ']' 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.232 ERROR: process (pid: 58801) is no longer running 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.232 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58801) - No such process 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:10.232 00:07:10.232 real 0m4.173s 00:07:10.232 user 0m4.200s 00:07:10.232 sys 0m0.795s 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.232 14:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.232 ************************************ 00:07:10.232 END TEST default_locks 00:07:10.232 ************************************ 00:07:10.232 14:17:48 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:10.232 14:17:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:07:10.232 14:17:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.232 14:17:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.232 ************************************ 00:07:10.232 START TEST default_locks_via_rpc 00:07:10.232 ************************************ 00:07:10.232 14:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:10.232 14:17:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58876 00:07:10.232 14:17:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58876 00:07:10.232 14:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58876 ']' 00:07:10.232 14:17:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.232 14:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.232 14:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.232 14:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.232 14:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.232 14:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.232 [2024-11-20 14:17:48.997151] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:07:10.232 [2024-11-20 14:17:48.997375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58876 ] 00:07:10.232 [2024-11-20 14:17:49.187325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.489 [2024-11-20 14:17:49.326048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.422 14:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.422 14:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:11.422 14:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:11.422 14:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.422 14:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.422 14:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.422 14:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:11.422 14:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:11.422 14:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:11.422 14:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:11.422 14:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:11.422 14:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.422 14:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.422 14:17:50 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.422 14:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58876 00:07:11.422 14:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58876 00:07:11.422 14:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.990 14:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58876 00:07:11.990 14:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58876 ']' 00:07:11.990 14:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58876 00:07:11.990 14:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:11.990 14:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.990 14:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58876 00:07:11.990 killing process with pid 58876 00:07:11.990 14:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.990 14:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.990 14:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58876' 00:07:11.990 14:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58876 00:07:11.990 14:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58876 00:07:14.551 00:07:14.551 real 0m4.107s 00:07:14.551 user 0m4.112s 00:07:14.551 sys 0m0.758s 00:07:14.551 14:17:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.551 
************************************ 00:07:14.551 END TEST default_locks_via_rpc 00:07:14.551 ************************************ 00:07:14.551 14:17:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.551 14:17:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:14.551 14:17:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.551 14:17:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.551 14:17:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.551 ************************************ 00:07:14.551 START TEST non_locking_app_on_locked_coremask 00:07:14.551 ************************************ 00:07:14.551 14:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:14.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:14.551 14:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58951 00:07:14.551 14:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58951 /var/tmp/spdk.sock 00:07:14.551 14:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:14.551 14:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58951 ']' 00:07:14.551 14:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.551 14:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.551 14:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.551 14:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.551 14:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.551 [2024-11-20 14:17:53.162103] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:07:14.551 [2024-11-20 14:17:53.162604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58951 ] 00:07:14.551 [2024-11-20 14:17:53.347468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.551 [2024-11-20 14:17:53.487018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.487 14:17:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.487 14:17:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:15.487 14:17:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58968 00:07:15.487 14:17:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:15.487 14:17:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58968 /var/tmp/spdk2.sock 00:07:15.487 14:17:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58968 ']' 00:07:15.487 14:17:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.487 14:17:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.487 14:17:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:15.487 14:17:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.487 14:17:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.810 [2024-11-20 14:17:54.603760] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:07:15.810 [2024-11-20 14:17:54.604875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58968 ] 00:07:16.068 [2024-11-20 14:17:54.814259] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:16.068 [2024-11-20 14:17:54.814340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.327 [2024-11-20 14:17:55.088035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.855 14:17:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.855 14:17:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:18.855 14:17:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58951 00:07:18.856 14:17:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58951 00:07:18.856 14:17:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:19.423 14:17:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58951 00:07:19.423 14:17:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58951 ']' 00:07:19.423 14:17:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58951 00:07:19.423 14:17:58 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:19.423 14:17:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.423 14:17:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58951 00:07:19.423 14:17:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.423 14:17:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.423 killing process with pid 58951 00:07:19.423 14:17:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58951' 00:07:19.423 14:17:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58951 00:07:19.423 14:17:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58951 00:07:23.610 14:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58968 00:07:23.610 14:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58968 ']' 00:07:23.610 14:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58968 00:07:23.610 14:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:23.610 14:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.610 14:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58968 00:07:23.869 killing process with pid 58968 00:07:23.869 14:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:07:23.869 14:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.869 14:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58968' 00:07:23.869 14:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58968 00:07:23.869 14:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58968 00:07:25.865 00:07:25.865 real 0m11.703s 00:07:25.865 user 0m12.296s 00:07:25.865 sys 0m1.715s 00:07:25.865 14:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.865 ************************************ 00:07:25.865 END TEST non_locking_app_on_locked_coremask 00:07:25.865 ************************************ 00:07:25.865 14:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.865 14:18:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:25.865 14:18:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.865 14:18:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.865 14:18:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.865 ************************************ 00:07:25.865 START TEST locking_app_on_unlocked_coremask 00:07:25.865 ************************************ 00:07:25.865 14:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:25.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:25.865 14:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59117 00:07:25.865 14:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:25.865 14:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59117 /var/tmp/spdk.sock 00:07:25.865 14:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59117 ']' 00:07:25.865 14:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.865 14:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.865 14:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.865 14:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.865 14:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.124 [2024-11-20 14:18:04.911569] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:07:26.124 [2024-11-20 14:18:04.912031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59117 ] 00:07:26.124 [2024-11-20 14:18:05.095464] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:26.124 [2024-11-20 14:18:05.095874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.382 [2024-11-20 14:18:05.231329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.316 14:18:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.316 14:18:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:27.316 14:18:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59139 00:07:27.316 14:18:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:27.316 14:18:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59139 /var/tmp/spdk2.sock 00:07:27.316 14:18:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59139 ']' 00:07:27.316 14:18:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.316 14:18:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:27.316 14:18:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.316 14:18:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.316 14:18:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.316 [2024-11-20 14:18:06.187064] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:07:27.316 [2024-11-20 14:18:06.187591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59139 ] 00:07:27.574 [2024-11-20 14:18:06.387817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.833 [2024-11-20 14:18:06.634401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.368 14:18:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.368 14:18:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:30.368 14:18:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59139 00:07:30.368 14:18:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59139 00:07:30.368 14:18:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:30.628 14:18:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59117 00:07:30.628 14:18:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59117 ']' 00:07:30.628 14:18:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59117 00:07:30.628 14:18:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:30.628 14:18:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.628 14:18:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59117 00:07:30.628 killing process with pid 59117 00:07:30.628 14:18:09 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.628 14:18:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.628 14:18:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59117' 00:07:30.628 14:18:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59117 00:07:30.628 14:18:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59117 00:07:35.898 14:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59139 00:07:35.898 14:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59139 ']' 00:07:35.899 14:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59139 00:07:35.899 14:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:35.899 14:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.899 14:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59139 00:07:35.899 killing process with pid 59139 00:07:35.899 14:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.899 14:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.899 14:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59139' 00:07:35.899 14:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59139 00:07:35.899 14:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59139 00:07:37.278 ************************************ 00:07:37.278 END TEST locking_app_on_unlocked_coremask 00:07:37.278 ************************************ 00:07:37.278 00:07:37.278 real 0m11.328s 00:07:37.278 user 0m11.747s 00:07:37.278 sys 0m1.472s 00:07:37.278 14:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.278 14:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.278 14:18:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:37.278 14:18:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.278 14:18:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.278 14:18:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.278 ************************************ 00:07:37.278 START TEST locking_app_on_locked_coremask 00:07:37.278 ************************************ 00:07:37.278 14:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:37.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:37.278 14:18:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59284 00:07:37.278 14:18:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59284 /var/tmp/spdk.sock 00:07:37.278 14:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59284 ']' 00:07:37.278 14:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.278 14:18:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:37.278 14:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.278 14:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.278 14:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.278 14:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.539 [2024-11-20 14:18:16.298723] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:07:37.539 [2024-11-20 14:18:16.298920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59284 ] 00:07:37.539 [2024-11-20 14:18:16.484741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.798 [2024-11-20 14:18:16.621840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59306 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59306 /var/tmp/spdk2.sock 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59306 /var/tmp/spdk2.sock 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59306 /var/tmp/spdk2.sock 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59306 ']' 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:38.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.736 14:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.736 [2024-11-20 14:18:17.635480] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:07:38.736 [2024-11-20 14:18:17.635958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59306 ] 00:07:38.995 [2024-11-20 14:18:17.836898] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59284 has claimed it. 00:07:38.995 [2024-11-20 14:18:17.836973] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:39.562 ERROR: process (pid: 59306) is no longer running 00:07:39.562 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59306) - No such process 00:07:39.562 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.562 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:39.563 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:39.563 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.563 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.563 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.563 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59284 00:07:39.563 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59284 00:07:39.563 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:40.130 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59284 00:07:40.130 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59284 ']' 00:07:40.130 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59284 00:07:40.130 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:40.130 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.130 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59284 00:07:40.130 
killing process with pid 59284 00:07:40.130 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.130 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.130 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59284' 00:07:40.131 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59284 00:07:40.131 14:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59284 00:07:42.128 00:07:42.128 real 0m4.889s 00:07:42.128 user 0m5.227s 00:07:42.128 sys 0m0.943s 00:07:42.128 14:18:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.128 ************************************ 00:07:42.128 14:18:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.128 END TEST locking_app_on_locked_coremask 00:07:42.128 ************************************ 00:07:42.128 14:18:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:42.128 14:18:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.128 14:18:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.128 14:18:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.386 ************************************ 00:07:42.386 START TEST locking_overlapped_coremask 00:07:42.386 ************************************ 00:07:42.386 14:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:42.386 14:18:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59375 00:07:42.387 14:18:21 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59375 /var/tmp/spdk.sock 00:07:42.387 14:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59375 ']' 00:07:42.387 14:18:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:42.387 14:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.387 14:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.387 14:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.387 14:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.387 14:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.387 [2024-11-20 14:18:21.236334] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:07:42.387 [2024-11-20 14:18:21.236530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59375 ] 00:07:42.645 [2024-11-20 14:18:21.421651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.645 [2024-11-20 14:18:21.552626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.645 [2024-11-20 14:18:21.552762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.645 [2024-11-20 14:18:21.552791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59400 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59400 /var/tmp/spdk2.sock 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59400 /var/tmp/spdk2.sock 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59400 /var/tmp/spdk2.sock 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59400 ']' 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:43.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.582 14:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.842 [2024-11-20 14:18:22.580772] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:07:43.842 [2024-11-20 14:18:22.581369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59400 ] 00:07:43.842 [2024-11-20 14:18:22.802042] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59375 has claimed it. 00:07:43.842 [2024-11-20 14:18:22.806099] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:44.409 ERROR: process (pid: 59400) is no longer running 00:07:44.409 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59400) - No such process 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59375 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59375 ']' 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59375 00:07:44.409 14:18:23 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59375 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59375' 00:07:44.409 killing process with pid 59375 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59375 00:07:44.409 14:18:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59375 00:07:47.008 00:07:47.008 real 0m4.444s 00:07:47.008 user 0m12.084s 00:07:47.008 sys 0m0.779s 00:07:47.008 14:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.008 14:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.008 ************************************ 00:07:47.008 END TEST locking_overlapped_coremask 00:07:47.008 ************************************ 00:07:47.008 14:18:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:47.008 14:18:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.008 14:18:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.008 14:18:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:47.008 ************************************ 00:07:47.008 START TEST 
locking_overlapped_coremask_via_rpc 00:07:47.008 ************************************ 00:07:47.008 14:18:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:47.008 14:18:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59464 00:07:47.008 14:18:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:47.008 14:18:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59464 /var/tmp/spdk.sock 00:07:47.008 14:18:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59464 ']' 00:07:47.008 14:18:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.008 14:18:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.008 14:18:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.008 14:18:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.008 14:18:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.008 [2024-11-20 14:18:25.715284] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:07:47.009 [2024-11-20 14:18:25.715687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59464 ] 00:07:47.009 [2024-11-20 14:18:25.893770] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:47.009 [2024-11-20 14:18:25.894086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.268 [2024-11-20 14:18:26.031480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.268 [2024-11-20 14:18:26.031562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.268 [2024-11-20 14:18:26.031569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.206 14:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.206 14:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:48.206 14:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:48.206 14:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59482 00:07:48.206 14:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59482 /var/tmp/spdk2.sock 00:07:48.206 14:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59482 ']' 00:07:48.206 14:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:48.206 14:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.206 14:18:26 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:48.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:48.206 14:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.206 14:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.206 [2024-11-20 14:18:27.013574] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:07:48.206 [2024-11-20 14:18:27.014509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59482 ] 00:07:48.465 [2024-11-20 14:18:27.209719] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:48.465 [2024-11-20 14:18:27.209784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.724 [2024-11-20 14:18:27.495148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.724 [2024-11-20 14:18:27.495226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.724 [2024-11-20 14:18:27.495248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.256 14:18:29 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.256 [2024-11-20 14:18:29.841187] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59464 has claimed it. 00:07:51.256 request: 00:07:51.256 { 00:07:51.256 "method": "framework_enable_cpumask_locks", 00:07:51.256 "req_id": 1 00:07:51.256 } 00:07:51.256 Got JSON-RPC error response 00:07:51.256 response: 00:07:51.256 { 00:07:51.256 "code": -32603, 00:07:51.256 "message": "Failed to claim CPU core: 2" 00:07:51.256 } 00:07:51.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59464 /var/tmp/spdk.sock 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59464 ']' 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.256 14:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.256 14:18:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.256 14:18:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:51.256 14:18:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59482 /var/tmp/spdk2.sock 00:07:51.256 14:18:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59482 ']' 00:07:51.256 14:18:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:51.256 14:18:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.256 14:18:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:51.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:51.256 14:18:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.256 14:18:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.515 14:18:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.515 14:18:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:51.515 14:18:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:51.515 14:18:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:51.515 14:18:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:51.515 14:18:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:51.515 00:07:51.515 real 0m4.808s 00:07:51.515 user 0m1.837s 00:07:51.515 sys 0m0.242s 00:07:51.515 14:18:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.515 ************************************ 00:07:51.515 END TEST locking_overlapped_coremask_via_rpc 00:07:51.515 ************************************ 00:07:51.515 14:18:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.515 14:18:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:51.515 14:18:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59464 ]] 00:07:51.515 14:18:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59464 00:07:51.515 14:18:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59464 ']' 00:07:51.516 14:18:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59464 00:07:51.516 14:18:30 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:51.516 14:18:30 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.516 14:18:30 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59464 00:07:51.516 killing process with pid 59464 00:07:51.516 14:18:30 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.516 14:18:30 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.516 14:18:30 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59464' 00:07:51.516 14:18:30 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59464 00:07:51.516 14:18:30 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59464 00:07:54.048 14:18:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59482 ]] 00:07:54.048 14:18:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59482 00:07:54.048 14:18:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59482 ']' 00:07:54.048 14:18:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59482 00:07:54.048 14:18:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:54.049 14:18:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.049 14:18:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59482 00:07:54.049 killing process with pid 59482 00:07:54.049 14:18:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:54.049 14:18:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:54.049 14:18:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59482' 00:07:54.049 14:18:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59482 00:07:54.049 14:18:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59482 00:07:56.578 14:18:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:56.578 14:18:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:56.578 14:18:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59464 ]] 00:07:56.578 14:18:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59464 00:07:56.578 Process with pid 59464 is not found 00:07:56.578 Process with pid 59482 is not found 00:07:56.578 14:18:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59464 ']' 00:07:56.578 14:18:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59464 00:07:56.578 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59464) - No such process 00:07:56.578 14:18:34 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59464 is not found' 00:07:56.578 14:18:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59482 ]] 00:07:56.578 14:18:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59482 00:07:56.578 14:18:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59482 ']' 00:07:56.578 14:18:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59482 00:07:56.578 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59482) - No such process 00:07:56.578 14:18:34 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59482 is not found' 00:07:56.578 14:18:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:56.578 ************************************ 00:07:56.578 END TEST cpu_locks 00:07:56.578 ************************************ 00:07:56.578 00:07:56.578 real 0m50.521s 00:07:56.578 user 1m27.413s 00:07:56.578 sys 0m7.943s 00:07:56.578 14:18:34 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:56.578 14:18:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.578 ************************************ 00:07:56.578 END TEST event 00:07:56.578 ************************************ 00:07:56.578 00:07:56.578 real 1m21.649s 00:07:56.578 user 2m31.254s 00:07:56.578 sys 0m12.228s 00:07:56.578 14:18:35 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.578 14:18:35 event -- common/autotest_common.sh@10 -- # set +x 00:07:56.578 14:18:35 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:56.578 14:18:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.578 14:18:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.578 14:18:35 -- common/autotest_common.sh@10 -- # set +x 00:07:56.578 ************************************ 00:07:56.578 START TEST thread 00:07:56.578 ************************************ 00:07:56.578 14:18:35 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:56.578 * Looking for test storage... 
00:07:56.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:56.578 14:18:35 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:56.578 14:18:35 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:56.578 14:18:35 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:56.578 14:18:35 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:56.578 14:18:35 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.578 14:18:35 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.578 14:18:35 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.578 14:18:35 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.578 14:18:35 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.578 14:18:35 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.578 14:18:35 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.578 14:18:35 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.578 14:18:35 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.578 14:18:35 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.578 14:18:35 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.578 14:18:35 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:56.578 14:18:35 thread -- scripts/common.sh@345 -- # : 1 00:07:56.578 14:18:35 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.578 14:18:35 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.578 14:18:35 thread -- scripts/common.sh@365 -- # decimal 1 00:07:56.578 14:18:35 thread -- scripts/common.sh@353 -- # local d=1 00:07:56.579 14:18:35 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.579 14:18:35 thread -- scripts/common.sh@355 -- # echo 1 00:07:56.579 14:18:35 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.579 14:18:35 thread -- scripts/common.sh@366 -- # decimal 2 00:07:56.579 14:18:35 thread -- scripts/common.sh@353 -- # local d=2 00:07:56.579 14:18:35 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.579 14:18:35 thread -- scripts/common.sh@355 -- # echo 2 00:07:56.579 14:18:35 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.579 14:18:35 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.579 14:18:35 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.579 14:18:35 thread -- scripts/common.sh@368 -- # return 0 00:07:56.579 14:18:35 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.579 14:18:35 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:56.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.579 --rc genhtml_branch_coverage=1 00:07:56.579 --rc genhtml_function_coverage=1 00:07:56.579 --rc genhtml_legend=1 00:07:56.579 --rc geninfo_all_blocks=1 00:07:56.579 --rc geninfo_unexecuted_blocks=1 00:07:56.579 00:07:56.579 ' 00:07:56.579 14:18:35 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:56.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.579 --rc genhtml_branch_coverage=1 00:07:56.579 --rc genhtml_function_coverage=1 00:07:56.579 --rc genhtml_legend=1 00:07:56.579 --rc geninfo_all_blocks=1 00:07:56.579 --rc geninfo_unexecuted_blocks=1 00:07:56.579 00:07:56.579 ' 00:07:56.579 14:18:35 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:56.579 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.579 --rc genhtml_branch_coverage=1 00:07:56.579 --rc genhtml_function_coverage=1 00:07:56.579 --rc genhtml_legend=1 00:07:56.579 --rc geninfo_all_blocks=1 00:07:56.579 --rc geninfo_unexecuted_blocks=1 00:07:56.579 00:07:56.579 ' 00:07:56.579 14:18:35 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:56.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.579 --rc genhtml_branch_coverage=1 00:07:56.579 --rc genhtml_function_coverage=1 00:07:56.579 --rc genhtml_legend=1 00:07:56.579 --rc geninfo_all_blocks=1 00:07:56.579 --rc geninfo_unexecuted_blocks=1 00:07:56.579 00:07:56.579 ' 00:07:56.579 14:18:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:56.579 14:18:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:56.579 14:18:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.579 14:18:35 thread -- common/autotest_common.sh@10 -- # set +x 00:07:56.579 ************************************ 00:07:56.579 START TEST thread_poller_perf 00:07:56.579 ************************************ 00:07:56.579 14:18:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:56.579 [2024-11-20 14:18:35.299455] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:07:56.579 [2024-11-20 14:18:35.299854] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59677 ] 00:07:56.579 [2024-11-20 14:18:35.506110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.837 [2024-11-20 14:18:35.656450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.837 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:58.214 [2024-11-20T14:18:37.196Z] ====================================== 00:07:58.214 [2024-11-20T14:18:37.196Z] busy:2212217578 (cyc) 00:07:58.214 [2024-11-20T14:18:37.196Z] total_run_count: 299000 00:07:58.214 [2024-11-20T14:18:37.196Z] tsc_hz: 2200000000 (cyc) 00:07:58.214 [2024-11-20T14:18:37.196Z] ====================================== 00:07:58.214 [2024-11-20T14:18:37.196Z] poller_cost: 7398 (cyc), 3362 (nsec) 00:07:58.214 ************************************ 00:07:58.214 END TEST thread_poller_perf 00:07:58.214 ************************************ 00:07:58.214 00:07:58.214 real 0m1.645s 00:07:58.214 user 0m1.412s 00:07:58.214 sys 0m0.122s 00:07:58.214 14:18:36 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.214 14:18:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:58.214 14:18:36 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:58.214 14:18:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:58.214 14:18:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.214 14:18:36 thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.214 ************************************ 00:07:58.214 START TEST thread_poller_perf 00:07:58.214 
************************************ 00:07:58.214 14:18:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:58.214 [2024-11-20 14:18:36.983089] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:07:58.214 [2024-11-20 14:18:36.983408] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59719 ] 00:07:58.214 [2024-11-20 14:18:37.158104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.472 [2024-11-20 14:18:37.303701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.472 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:59.848 [2024-11-20T14:18:38.830Z] ====================================== 00:07:59.848 [2024-11-20T14:18:38.830Z] busy:2204362267 (cyc) 00:07:59.848 [2024-11-20T14:18:38.830Z] total_run_count: 3753000 00:07:59.848 [2024-11-20T14:18:38.830Z] tsc_hz: 2200000000 (cyc) 00:07:59.848 [2024-11-20T14:18:38.830Z] ====================================== 00:07:59.848 [2024-11-20T14:18:38.830Z] poller_cost: 587 (cyc), 266 (nsec) 00:07:59.848 00:07:59.848 real 0m1.589s 00:07:59.848 user 0m1.382s 00:07:59.848 sys 0m0.097s 00:07:59.848 14:18:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.848 14:18:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:59.848 ************************************ 00:07:59.848 END TEST thread_poller_perf 00:07:59.848 ************************************ 00:07:59.848 14:18:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:59.848 00:07:59.848 real 0m3.522s 00:07:59.848 user 0m2.933s 00:07:59.848 sys 0m0.361s 00:07:59.848 14:18:38 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.848 14:18:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:59.848 ************************************ 00:07:59.848 END TEST thread 00:07:59.848 ************************************ 00:07:59.848 14:18:38 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:59.848 14:18:38 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:59.848 14:18:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.848 14:18:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.848 14:18:38 -- common/autotest_common.sh@10 -- # set +x 00:07:59.848 ************************************ 00:07:59.848 START TEST app_cmdline 00:07:59.848 ************************************ 00:07:59.848 14:18:38 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:59.848 * Looking for test storage... 00:07:59.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:59.848 14:18:38 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:59.848 14:18:38 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:59.848 14:18:38 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:59.848 14:18:38 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:59.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.848 14:18:38 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:59.848 14:18:38 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.848 14:18:38 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:59.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.848 --rc genhtml_branch_coverage=1 00:07:59.848 --rc genhtml_function_coverage=1 00:07:59.848 --rc genhtml_legend=1 00:07:59.848 --rc geninfo_all_blocks=1 00:07:59.848 --rc geninfo_unexecuted_blocks=1 00:07:59.848 00:07:59.848 ' 00:07:59.848 14:18:38 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:59.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.848 --rc genhtml_branch_coverage=1 00:07:59.848 --rc genhtml_function_coverage=1 00:07:59.848 --rc genhtml_legend=1 00:07:59.848 --rc geninfo_all_blocks=1 00:07:59.848 --rc geninfo_unexecuted_blocks=1 00:07:59.848 00:07:59.848 ' 00:07:59.848 14:18:38 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:59.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.848 --rc genhtml_branch_coverage=1 00:07:59.848 --rc genhtml_function_coverage=1 00:07:59.848 --rc genhtml_legend=1 00:07:59.848 --rc geninfo_all_blocks=1 00:07:59.848 --rc geninfo_unexecuted_blocks=1 00:07:59.848 00:07:59.848 ' 00:07:59.848 14:18:38 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:59.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.848 --rc genhtml_branch_coverage=1 00:07:59.848 --rc genhtml_function_coverage=1 00:07:59.848 --rc genhtml_legend=1 00:07:59.848 --rc geninfo_all_blocks=1 00:07:59.848 --rc 
geninfo_unexecuted_blocks=1 00:07:59.848 00:07:59.848 ' 00:07:59.848 14:18:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:59.848 14:18:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59803 00:07:59.848 14:18:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59803 00:07:59.848 14:18:38 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59803 ']' 00:07:59.848 14:18:38 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:59.848 14:18:38 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.848 14:18:38 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.849 14:18:38 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.849 14:18:38 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.849 14:18:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:00.107 [2024-11-20 14:18:38.890002] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:08:00.107 [2024-11-20 14:18:38.890447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59803 ] 00:08:00.107 [2024-11-20 14:18:39.066466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.365 [2024-11-20 14:18:39.196107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.303 14:18:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.303 14:18:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:01.303 14:18:40 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:01.563 { 00:08:01.563 "version": "SPDK v25.01-pre git sha1 5c8d99223", 00:08:01.563 "fields": { 00:08:01.563 "major": 25, 00:08:01.563 "minor": 1, 00:08:01.563 "patch": 0, 00:08:01.563 "suffix": "-pre", 00:08:01.563 "commit": "5c8d99223" 00:08:01.563 } 00:08:01.563 } 00:08:01.563 14:18:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:01.563 14:18:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:01.563 14:18:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:01.563 14:18:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:01.563 14:18:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:01.563 14:18:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:01.563 14:18:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.563 14:18:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:01.563 14:18:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:01.563 14:18:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.563 14:18:40 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:01.563 14:18:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:01.563 14:18:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.563 14:18:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:01.563 14:18:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.563 14:18:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.563 14:18:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.563 14:18:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.563 14:18:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.563 14:18:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.563 14:18:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.563 14:18:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.563 14:18:40 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:01.563 14:18:40 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.822 request: 00:08:01.822 { 00:08:01.822 "method": "env_dpdk_get_mem_stats", 00:08:01.822 "req_id": 1 00:08:01.822 } 00:08:01.822 Got JSON-RPC error response 00:08:01.822 response: 00:08:01.822 { 00:08:01.822 "code": -32601, 00:08:01.822 "message": "Method not found" 00:08:01.822 } 00:08:01.822 14:18:40 app_cmdline -- common/autotest_common.sh@655 -- # es=1 
00:08:01.822 14:18:40 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:01.822 14:18:40 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:01.822 14:18:40 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:01.822 14:18:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59803 00:08:01.822 14:18:40 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59803 ']' 00:08:01.822 14:18:40 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59803 00:08:01.822 14:18:40 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:01.822 14:18:40 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.822 14:18:40 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59803 00:08:01.822 killing process with pid 59803 00:08:01.822 14:18:40 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.822 14:18:40 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.822 14:18:40 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59803' 00:08:01.822 14:18:40 app_cmdline -- common/autotest_common.sh@973 -- # kill 59803 00:08:01.822 14:18:40 app_cmdline -- common/autotest_common.sh@978 -- # wait 59803 00:08:04.378 00:08:04.378 real 0m4.373s 00:08:04.378 user 0m4.903s 00:08:04.378 sys 0m0.661s 00:08:04.378 14:18:42 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.378 ************************************ 00:08:04.378 END TEST app_cmdline 00:08:04.378 ************************************ 00:08:04.378 14:18:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:04.378 14:18:43 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:04.378 14:18:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.378 14:18:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.378 14:18:43 -- 
common/autotest_common.sh@10 -- # set +x 00:08:04.378 ************************************ 00:08:04.378 START TEST version 00:08:04.378 ************************************ 00:08:04.378 14:18:43 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:04.378 * Looking for test storage... 00:08:04.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:04.378 14:18:43 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:04.378 14:18:43 version -- common/autotest_common.sh@1693 -- # lcov --version 00:08:04.378 14:18:43 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:04.378 14:18:43 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:04.378 14:18:43 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.378 14:18:43 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.378 14:18:43 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.378 14:18:43 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.378 14:18:43 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.378 14:18:43 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.378 14:18:43 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.378 14:18:43 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.378 14:18:43 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.378 14:18:43 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.378 14:18:43 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.378 14:18:43 version -- scripts/common.sh@344 -- # case "$op" in 00:08:04.378 14:18:43 version -- scripts/common.sh@345 -- # : 1 00:08:04.378 14:18:43 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.378 14:18:43 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:04.378 14:18:43 version -- scripts/common.sh@365 -- # decimal 1 00:08:04.379 14:18:43 version -- scripts/common.sh@353 -- # local d=1 00:08:04.379 14:18:43 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.379 14:18:43 version -- scripts/common.sh@355 -- # echo 1 00:08:04.379 14:18:43 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.379 14:18:43 version -- scripts/common.sh@366 -- # decimal 2 00:08:04.379 14:18:43 version -- scripts/common.sh@353 -- # local d=2 00:08:04.379 14:18:43 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.379 14:18:43 version -- scripts/common.sh@355 -- # echo 2 00:08:04.379 14:18:43 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.379 14:18:43 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.379 14:18:43 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.379 14:18:43 version -- scripts/common.sh@368 -- # return 0 00:08:04.379 14:18:43 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.379 14:18:43 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:04.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.379 --rc genhtml_branch_coverage=1 00:08:04.379 --rc genhtml_function_coverage=1 00:08:04.379 --rc genhtml_legend=1 00:08:04.379 --rc geninfo_all_blocks=1 00:08:04.379 --rc geninfo_unexecuted_blocks=1 00:08:04.379 00:08:04.379 ' 00:08:04.379 14:18:43 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:04.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.379 --rc genhtml_branch_coverage=1 00:08:04.379 --rc genhtml_function_coverage=1 00:08:04.379 --rc genhtml_legend=1 00:08:04.379 --rc geninfo_all_blocks=1 00:08:04.379 --rc geninfo_unexecuted_blocks=1 00:08:04.379 00:08:04.379 ' 00:08:04.379 14:18:43 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:04.379 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.379 --rc genhtml_branch_coverage=1 00:08:04.379 --rc genhtml_function_coverage=1 00:08:04.379 --rc genhtml_legend=1 00:08:04.379 --rc geninfo_all_blocks=1 00:08:04.379 --rc geninfo_unexecuted_blocks=1 00:08:04.379 00:08:04.379 ' 00:08:04.379 14:18:43 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:04.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.379 --rc genhtml_branch_coverage=1 00:08:04.379 --rc genhtml_function_coverage=1 00:08:04.379 --rc genhtml_legend=1 00:08:04.379 --rc geninfo_all_blocks=1 00:08:04.379 --rc geninfo_unexecuted_blocks=1 00:08:04.379 00:08:04.379 ' 00:08:04.379 14:18:43 version -- app/version.sh@17 -- # get_header_version major 00:08:04.379 14:18:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:04.379 14:18:43 version -- app/version.sh@14 -- # cut -f2 00:08:04.379 14:18:43 version -- app/version.sh@14 -- # tr -d '"' 00:08:04.379 14:18:43 version -- app/version.sh@17 -- # major=25 00:08:04.379 14:18:43 version -- app/version.sh@18 -- # get_header_version minor 00:08:04.379 14:18:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:04.379 14:18:43 version -- app/version.sh@14 -- # cut -f2 00:08:04.379 14:18:43 version -- app/version.sh@14 -- # tr -d '"' 00:08:04.379 14:18:43 version -- app/version.sh@18 -- # minor=1 00:08:04.379 14:18:43 version -- app/version.sh@19 -- # get_header_version patch 00:08:04.379 14:18:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:04.379 14:18:43 version -- app/version.sh@14 -- # tr -d '"' 00:08:04.379 14:18:43 version -- app/version.sh@14 -- # cut -f2 00:08:04.379 14:18:43 version -- app/version.sh@19 -- # patch=0 00:08:04.379 
14:18:43 version -- app/version.sh@20 -- # get_header_version suffix 00:08:04.379 14:18:43 version -- app/version.sh@14 -- # cut -f2 00:08:04.379 14:18:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:04.379 14:18:43 version -- app/version.sh@14 -- # tr -d '"' 00:08:04.379 14:18:43 version -- app/version.sh@20 -- # suffix=-pre 00:08:04.379 14:18:43 version -- app/version.sh@22 -- # version=25.1 00:08:04.379 14:18:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:04.379 14:18:43 version -- app/version.sh@28 -- # version=25.1rc0 00:08:04.379 14:18:43 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:04.379 14:18:43 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:04.379 14:18:43 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:04.379 14:18:43 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:04.379 00:08:04.379 real 0m0.265s 00:08:04.379 user 0m0.171s 00:08:04.379 sys 0m0.133s 00:08:04.379 ************************************ 00:08:04.379 END TEST version 00:08:04.379 ************************************ 00:08:04.379 14:18:43 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.379 14:18:43 version -- common/autotest_common.sh@10 -- # set +x 00:08:04.379 14:18:43 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:04.379 14:18:43 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:08:04.379 14:18:43 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:04.379 14:18:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.379 14:18:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.379 14:18:43 -- 
common/autotest_common.sh@10 -- # set +x 00:08:04.638 ************************************ 00:08:04.638 START TEST bdev_raid 00:08:04.638 ************************************ 00:08:04.638 14:18:43 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:04.638 * Looking for test storage... 00:08:04.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:04.638 14:18:43 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:04.638 14:18:43 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:08:04.638 14:18:43 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:04.638 14:18:43 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:04.638 14:18:43 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.638 14:18:43 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.638 14:18:43 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.638 14:18:43 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.638 14:18:43 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.638 14:18:43 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.638 14:18:43 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.638 14:18:43 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.638 14:18:43 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.638 14:18:43 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.638 14:18:43 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.638 14:18:43 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:08:04.638 14:18:43 bdev_raid -- scripts/common.sh@345 -- # : 1 00:08:04.638 14:18:43 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.638 14:18:43 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:04.638 14:18:43 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:08:04.638 14:18:43 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:08:04.639 14:18:43 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.639 14:18:43 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:08:04.639 14:18:43 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.639 14:18:43 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:08:04.639 14:18:43 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:08:04.639 14:18:43 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.639 14:18:43 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:08:04.639 14:18:43 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.639 14:18:43 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.639 14:18:43 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.639 14:18:43 bdev_raid -- scripts/common.sh@368 -- # return 0 00:08:04.639 14:18:43 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.639 14:18:43 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:04.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.639 --rc genhtml_branch_coverage=1 00:08:04.639 --rc genhtml_function_coverage=1 00:08:04.639 --rc genhtml_legend=1 00:08:04.639 --rc geninfo_all_blocks=1 00:08:04.639 --rc geninfo_unexecuted_blocks=1 00:08:04.639 00:08:04.639 ' 00:08:04.639 14:18:43 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:04.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.639 --rc genhtml_branch_coverage=1 00:08:04.639 --rc genhtml_function_coverage=1 00:08:04.639 --rc genhtml_legend=1 00:08:04.639 --rc geninfo_all_blocks=1 00:08:04.639 --rc geninfo_unexecuted_blocks=1 00:08:04.639 00:08:04.639 ' 00:08:04.639 14:18:43 bdev_raid -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:08:04.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.639 --rc genhtml_branch_coverage=1 00:08:04.639 --rc genhtml_function_coverage=1 00:08:04.639 --rc genhtml_legend=1 00:08:04.639 --rc geninfo_all_blocks=1 00:08:04.639 --rc geninfo_unexecuted_blocks=1 00:08:04.639 00:08:04.639 ' 00:08:04.639 14:18:43 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:04.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.639 --rc genhtml_branch_coverage=1 00:08:04.639 --rc genhtml_function_coverage=1 00:08:04.639 --rc genhtml_legend=1 00:08:04.639 --rc geninfo_all_blocks=1 00:08:04.639 --rc geninfo_unexecuted_blocks=1 00:08:04.639 00:08:04.639 ' 00:08:04.639 14:18:43 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:04.639 14:18:43 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:04.639 14:18:43 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:08:04.639 14:18:43 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:08:04.639 14:18:43 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:08:04.639 14:18:43 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:08:04.639 14:18:43 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:08:04.639 14:18:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.639 14:18:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.639 14:18:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.639 ************************************ 00:08:04.639 START TEST raid1_resize_data_offset_test 00:08:04.639 ************************************ 00:08:04.639 14:18:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:08:04.639 14:18:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=59990 00:08:04.639 14:18:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:04.639 14:18:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59990' 00:08:04.639 Process raid pid: 59990 00:08:04.639 14:18:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59990 00:08:04.639 14:18:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59990 ']' 00:08:04.639 14:18:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.639 14:18:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.639 14:18:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.639 14:18:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.639 14:18:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.897 [2024-11-20 14:18:43.650156] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:08:04.897 [2024-11-20 14:18:43.650569] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.897 [2024-11-20 14:18:43.825710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.157 [2024-11-20 14:18:43.958626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.416 [2024-11-20 14:18:44.165455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.416 [2024-11-20 14:18:44.165644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.675 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.675 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:08:05.675 14:18:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:08:05.675 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.675 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.935 malloc0 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.935 malloc1 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.935 14:18:44 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.935 null0 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.935 [2024-11-20 14:18:44.790105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:08:05.935 [2024-11-20 14:18:44.792620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:05.935 [2024-11-20 14:18:44.792707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:08:05.935 [2024-11-20 14:18:44.792927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:05.935 [2024-11-20 14:18:44.792950] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:08:05.935 [2024-11-20 14:18:44.793348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:05.935 [2024-11-20 14:18:44.793575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:05.935 [2024-11-20 14:18:44.793595] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:05.935 [2024-11-20 14:18:44.793812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.935 [2024-11-20 14:18:44.846149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.935 14:18:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.504 malloc2 00:08:06.504 14:18:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.504 14:18:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:08:06.504 14:18:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.504 14:18:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.504 [2024-11-20 14:18:45.379012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:06.504 [2024-11-20 14:18:45.397076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:06.504 14:18:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.504 [2024-11-20 14:18:45.400192] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:08:06.504 14:18:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.504 14:18:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:06.504 14:18:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.504 14:18:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.504 14:18:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.504 14:18:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:08:06.504 14:18:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59990 00:08:06.504 14:18:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59990 ']' 00:08:06.504 14:18:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59990 00:08:06.504 14:18:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:08:06.504 14:18:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:08:06.504 14:18:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59990 00:08:06.763 killing process with pid 59990 00:08:06.763 14:18:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.763 14:18:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.763 14:18:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59990' 00:08:06.763 14:18:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59990 00:08:06.763 [2024-11-20 14:18:45.489645] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:06.763 14:18:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59990 00:08:06.763 [2024-11-20 14:18:45.491897] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:08:06.763 [2024-11-20 14:18:45.492232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.763 [2024-11-20 14:18:45.492267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:08:06.763 [2024-11-20 14:18:45.523113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.763 [2024-11-20 14:18:45.523838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:06.763 [2024-11-20 14:18:45.523872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:08.139 [2024-11-20 14:18:47.089343] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:09.512 14:18:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:08:09.512 00:08:09.512 real 0m4.563s 00:08:09.512 user 0m4.512s 00:08:09.512 sys 0m0.625s 00:08:09.512 14:18:48 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.512 14:18:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.512 ************************************ 00:08:09.512 END TEST raid1_resize_data_offset_test 00:08:09.512 ************************************ 00:08:09.512 14:18:48 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:08:09.512 14:18:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:09.512 14:18:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.512 14:18:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:09.512 ************************************ 00:08:09.512 START TEST raid0_resize_superblock_test 00:08:09.512 ************************************ 00:08:09.512 14:18:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:08:09.512 14:18:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:08:09.512 Process raid pid: 60074 00:08:09.512 14:18:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60074 00:08:09.512 14:18:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60074' 00:08:09.512 14:18:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:09.512 14:18:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60074 00:08:09.512 14:18:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60074 ']' 00:08:09.512 14:18:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.512 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:08:09.512 14:18:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.512 14:18:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.512 14:18:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.512 14:18:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.512 [2024-11-20 14:18:48.280795] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:08:09.512 [2024-11-20 14:18:48.281230] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.512 [2024-11-20 14:18:48.470687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.795 [2024-11-20 14:18:48.599318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.053 [2024-11-20 14:18:48.810006] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.053 [2024-11-20 14:18:48.810081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.311 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.311 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:10.311 14:18:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:10.311 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.311 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.878 
malloc0 00:08:10.878 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.878 14:18:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:10.878 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.878 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.878 [2024-11-20 14:18:49.833662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:10.878 [2024-11-20 14:18:49.833736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.878 [2024-11-20 14:18:49.833768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:10.878 [2024-11-20 14:18:49.833789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.878 [2024-11-20 14:18:49.836554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.878 [2024-11-20 14:18:49.836604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:10.878 pt0 00:08:10.878 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.878 14:18:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:10.878 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.878 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.134 60ac3a00-406b-46d4-8edd-bb90f960d3f7 00:08:11.134 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.135 14:18:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:11.135 14:18:49 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.135 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.135 50e31570-3127-4c19-b31d-85d5d75d4f1b 00:08:11.135 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.135 14:18:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:11.135 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.135 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.135 2e3fc039-9307-4aee-bd36-1719a4dd231e 00:08:11.135 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.135 14:18:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:11.135 14:18:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:11.135 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.135 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.135 [2024-11-20 14:18:49.976716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 50e31570-3127-4c19-b31d-85d5d75d4f1b is claimed 00:08:11.135 [2024-11-20 14:18:49.976830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2e3fc039-9307-4aee-bd36-1719a4dd231e is claimed 00:08:11.135 [2024-11-20 14:18:49.977044] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:11.135 [2024-11-20 14:18:49.977071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:08:11.135 [2024-11-20 14:18:49.977399] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:11.135 [2024-11-20 14:18:49.977650] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:11.135 [2024-11-20 14:18:49.977667] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:11.135 [2024-11-20 14:18:49.977875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.135 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.135 14:18:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:11.135 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.135 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.135 14:18:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:11.135 14:18:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.135 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:11.135 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:11.135 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.135 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:11.135 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.135 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.135 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:11.135 14:18:50 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:11.135 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:11.135 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.135 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.135 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:11.135 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:08:11.135 [2024-11-20 14:18:50.097077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.135 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.394 [2024-11-20 14:18:50.137039] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:11.394 [2024-11-20 14:18:50.137070] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '50e31570-3127-4c19-b31d-85d5d75d4f1b' was resized: old size 131072, new size 204800 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.394 14:18:50 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.394 [2024-11-20 14:18:50.144885] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:11.394 [2024-11-20 14:18:50.144915] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2e3fc039-9307-4aee-bd36-1719a4dd231e' was resized: old size 131072, new size 204800 00:08:11.394 [2024-11-20 14:18:50.144955] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.394 14:18:50 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:08:11.394 [2024-11-20 14:18:50.249122] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.394 [2024-11-20 14:18:50.300898] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:08:11.394 [2024-11-20 14:18:50.301011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:11.394 [2024-11-20 14:18:50.301038] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.394 [2024-11-20 14:18:50.301060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:11.394 [2024-11-20 14:18:50.301210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.394 [2024-11-20 14:18:50.301263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:11.394 [2024-11-20 14:18:50.301284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.394 [2024-11-20 14:18:50.308738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:11.394 [2024-11-20 14:18:50.308803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.394 [2024-11-20 14:18:50.308833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:11.394 [2024-11-20 14:18:50.308850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.394 [2024-11-20 14:18:50.311712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.394 [2024-11-20 14:18:50.311909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:08:11.394 pt0 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.394 [2024-11-20 14:18:50.314347] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 50e31570-3127-4c19-b31d-85d5d75d4f1b 00:08:11.394 [2024-11-20 14:18:50.314445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 50e31570-3127-4c19-b31d-85d5d75d4f1b is claimed 00:08:11.394 [2024-11-20 14:18:50.314592] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2e3fc039-9307-4aee-bd36-1719a4dd231e 00:08:11.394 [2024-11-20 14:18:50.314625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2e3fc039-9307-4aee-bd36-1719a4dd231e is claimed 00:08:11.394 [2024-11-20 14:18:50.314803] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 2e3fc039-9307-4aee-bd36-1719a4dd231e (2) smaller than existing raid bdev Raid (3) 00:08:11.394 [2024-11-20 14:18:50.314839] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 50e31570-3127-4c19-b31d-85d5d75d4f1b: File exists 00:08:11.394 [2024-11-20 14:18:50.314906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:11.394 [2024-11-20 14:18:50.314929] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:08:11.394 [2024-11-20 14:18:50.315457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:11.394 [2024-11-20 14:18:50.315822] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:11.394 [2024-11-20 
14:18:50.315963] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:11.394 [2024-11-20 14:18:50.316417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.394 [2024-11-20 14:18:50.329099] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.394 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.653 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:11.653 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:11.653 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:08:11.653 14:18:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60074 00:08:11.653 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60074 ']' 00:08:11.653 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60074 00:08:11.653 14:18:50 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:08:11.653 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.653 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60074 00:08:11.653 killing process with pid 60074 00:08:11.653 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.653 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.653 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60074' 00:08:11.653 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60074 00:08:11.653 [2024-11-20 14:18:50.408442] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:11.653 14:18:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60074 00:08:11.653 [2024-11-20 14:18:50.408546] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.653 [2024-11-20 14:18:50.408611] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:11.653 [2024-11-20 14:18:50.408625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:13.027 [2024-11-20 14:18:51.695451] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.957 14:18:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:13.957 00:08:13.957 real 0m4.586s 00:08:13.957 user 0m4.898s 00:08:13.957 sys 0m0.643s 00:08:13.957 14:18:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.957 ************************************ 00:08:13.957 END TEST raid0_resize_superblock_test 00:08:13.957 14:18:52 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.957 ************************************ 00:08:13.957 14:18:52 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:08:13.957 14:18:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:13.957 14:18:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.957 14:18:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.957 ************************************ 00:08:13.957 START TEST raid1_resize_superblock_test 00:08:13.957 ************************************ 00:08:13.957 14:18:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:08:13.957 14:18:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:08:13.957 Process raid pid: 60172 00:08:13.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:13.957 14:18:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60172 00:08:13.957 14:18:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60172' 00:08:13.957 14:18:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60172 00:08:13.957 14:18:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:13.957 14:18:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60172 ']' 00:08:13.957 14:18:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.957 14:18:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.957 14:18:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.957 14:18:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.957 14:18:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.957 [2024-11-20 14:18:52.898795] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:08:13.957 [2024-11-20 14:18:52.899273] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.214 [2024-11-20 14:18:53.073588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.472 [2024-11-20 14:18:53.203740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.472 [2024-11-20 14:18:53.411891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.472 [2024-11-20 14:18:53.412175] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.039 14:18:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.039 14:18:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:15.039 14:18:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:15.039 14:18:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.039 14:18:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.606 malloc0 00:08:15.606 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.606 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:15.606 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.606 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.606 [2024-11-20 14:18:54.476308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:15.606 [2024-11-20 14:18:54.476394] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.606 [2024-11-20 14:18:54.476423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:15.606 [2024-11-20 14:18:54.476441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.606 [2024-11-20 14:18:54.479405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.606 [2024-11-20 14:18:54.479454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:15.606 pt0 00:08:15.606 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.606 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:15.606 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.606 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.866 be893e69-185f-4df5-825d-ab530c762700 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.866 edcbe40f-7d8e-4789-9758-92fd4600f571 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.866 14:18:54 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.866 648841bf-4abd-4fbd-a91d-abf6c64d3a56 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.866 [2024-11-20 14:18:54.626946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev edcbe40f-7d8e-4789-9758-92fd4600f571 is claimed 00:08:15.866 [2024-11-20 14:18:54.627086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 648841bf-4abd-4fbd-a91d-abf6c64d3a56 is claimed 00:08:15.866 [2024-11-20 14:18:54.627293] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:15.866 [2024-11-20 14:18:54.627321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:08:15.866 [2024-11-20 14:18:54.627705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:15.866 [2024-11-20 14:18:54.628089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:15.866 [2024-11-20 14:18:54.628113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:15.866 [2024-11-20 14:18:54.628306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.866 [2024-11-20 
14:18:54.743357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.866 [2024-11-20 14:18:54.791310] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:15.866 [2024-11-20 14:18:54.791379] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'edcbe40f-7d8e-4789-9758-92fd4600f571' was resized: old size 131072, new size 204800 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.866 [2024-11-20 14:18:54.799135] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:15.866 [2024-11-20 14:18:54.799164] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '648841bf-4abd-4fbd-a91d-abf6c64d3a56' was resized: old size 131072, new size 204800 00:08:15.866 
[2024-11-20 14:18:54.799202] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.866 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.126 14:18:54 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:08:16.126 [2024-11-20 14:18:54.919324] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.126 [2024-11-20 14:18:54.967133] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:16.126 [2024-11-20 14:18:54.967434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:16.126 [2024-11-20 14:18:54.967487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:16.126 [2024-11-20 14:18:54.967720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.126 [2024-11-20 14:18:54.968026] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.126 [2024-11-20 14:18:54.968136] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.126 
[2024-11-20 14:18:54.968159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.126 [2024-11-20 14:18:54.974996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:16.126 [2024-11-20 14:18:54.975141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.126 [2024-11-20 14:18:54.975171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:16.126 [2024-11-20 14:18:54.975192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.126 [2024-11-20 14:18:54.978335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.126 [2024-11-20 14:18:54.978534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:16.126 pt0 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.126 [2024-11-20 14:18:54.981192] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev edcbe40f-7d8e-4789-9758-92fd4600f571 00:08:16.126 [2024-11-20 14:18:54.981303] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev edcbe40f-7d8e-4789-9758-92fd4600f571 is claimed 00:08:16.126 [2024-11-20 14:18:54.981477] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 648841bf-4abd-4fbd-a91d-abf6c64d3a56 00:08:16.126 [2024-11-20 14:18:54.981511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 648841bf-4abd-4fbd-a91d-abf6c64d3a56 is claimed 00:08:16.126 [2024-11-20 14:18:54.981682] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 648841bf-4abd-4fbd-a91d-abf6c64d3a56 (2) smaller than existing raid bdev Raid (3) 00:08:16.126 [2024-11-20 14:18:54.981716] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev edcbe40f-7d8e-4789-9758-92fd4600f571: File exists 00:08:16.126 [2024-11-20 14:18:54.981773] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:16.126 [2024-11-20 14:18:54.981792] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:16.126 [2024-11-20 14:18:54.982154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:16.126 [2024-11-20 14:18:54.982416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:16.126 [2024-11-20 14:18:54.982439] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:16.126 [2024-11-20 14:18:54.982695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:16.126 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:16.127 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # 
case $raid_level in 00:08:16.127 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:08:16.127 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.127 14:18:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.127 [2024-11-20 14:18:54.995375] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.127 14:18:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.127 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:16.127 14:18:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:16.127 14:18:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:08:16.127 14:18:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60172 00:08:16.127 14:18:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60172 ']' 00:08:16.127 14:18:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60172 00:08:16.127 14:18:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:16.127 14:18:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.127 14:18:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60172 00:08:16.127 killing process with pid 60172 00:08:16.127 14:18:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.127 14:18:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.127 14:18:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 60172' 00:08:16.127 14:18:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60172 00:08:16.127 [2024-11-20 14:18:55.074807] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.127 14:18:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60172 00:08:16.127 [2024-11-20 14:18:55.074922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.127 [2024-11-20 14:18:55.075058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.127 [2024-11-20 14:18:55.075076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:17.504 [2024-11-20 14:18:56.391742] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:18.881 ************************************ 00:08:18.881 END TEST raid1_resize_superblock_test 00:08:18.881 ************************************ 00:08:18.881 14:18:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:18.881 00:08:18.881 real 0m4.663s 00:08:18.881 user 0m4.997s 00:08:18.881 sys 0m0.626s 00:08:18.881 14:18:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.881 14:18:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.881 14:18:57 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:08:18.881 14:18:57 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:08:18.881 14:18:57 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:08:18.881 14:18:57 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:08:18.881 14:18:57 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:08:18.881 14:18:57 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:08:18.881 14:18:57 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:18.881 14:18:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.881 14:18:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:18.881 ************************************ 00:08:18.881 START TEST raid_function_test_raid0 00:08:18.881 ************************************ 00:08:18.881 14:18:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:08:18.881 14:18:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:08:18.881 14:18:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:18.881 14:18:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:18.881 Process raid pid: 60275 00:08:18.881 14:18:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60275 00:08:18.881 14:18:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60275' 00:08:18.881 14:18:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60275 00:08:18.881 14:18:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:18.881 14:18:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60275 ']' 00:08:18.881 14:18:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.881 14:18:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.881 14:18:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:18.881 14:18:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.881 14:18:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:18.881 [2024-11-20 14:18:57.679523] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:08:18.881 [2024-11-20 14:18:57.679774] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.141 [2024-11-20 14:18:57.879671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.141 [2024-11-20 14:18:58.005427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.400 [2024-11-20 14:18:58.219885] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.400 [2024-11-20 14:18:58.220195] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.659 14:18:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.659 14:18:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:08:19.659 14:18:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:19.659 14:18:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.659 14:18:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:19.659 Base_1 00:08:19.659 14:18:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.659 14:18:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:19.659 14:18:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.659 
14:18:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:19.918 Base_2 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:19.918 [2024-11-20 14:18:58.680065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:19.918 [2024-11-20 14:18:58.682738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:19.918 [2024-11-20 14:18:58.682824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:19.918 [2024-11-20 14:18:58.682843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:19.918 [2024-11-20 14:18:58.683219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:19.918 [2024-11-20 14:18:58.683414] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:19.918 [2024-11-20 14:18:58.683429] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:19.918 [2024-11-20 14:18:58.683630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:19.918 14:18:58 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:19.918 14:18:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:20.177 [2024-11-20 14:18:59.020217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:20.177 /dev/nbd0 00:08:20.177 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:20.177 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:08:20.177 14:18:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:20.177 14:18:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:08:20.177 14:18:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:20.177 14:18:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:20.177 14:18:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:20.177 14:18:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:08:20.177 14:18:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:20.177 14:18:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:20.178 14:18:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:20.178 1+0 records in 00:08:20.178 1+0 records out 00:08:20.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317531 s, 12.9 MB/s 00:08:20.178 14:18:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:20.178 14:18:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:08:20.178 14:18:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:20.178 14:18:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:20.178 14:18:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:08:20.178 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:20.178 14:18:59 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:20.178 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:20.178 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:20.178 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:20.435 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:20.435 { 00:08:20.435 "nbd_device": "/dev/nbd0", 00:08:20.435 "bdev_name": "raid" 00:08:20.435 } 00:08:20.435 ]' 00:08:20.435 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:20.435 { 00:08:20.435 "nbd_device": "/dev/nbd0", 00:08:20.435 "bdev_name": "raid" 00:08:20.435 } 00:08:20.435 ]' 00:08:20.435 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:20.694 4096+0 records in 00:08:20.694 4096+0 records out 00:08:20.694 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0272371 s, 77.0 MB/s 00:08:20.694 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:20.954 4096+0 records in 00:08:20.954 4096+0 records out 00:08:20.954 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.306636 s, 6.8 MB/s 00:08:20.954 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:20.954 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:20.954 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:20.954 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:20.954 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:20.954 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:20.954 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:20.954 128+0 records in 00:08:20.954 128+0 records out 00:08:20.954 65536 bytes (66 kB, 64 KiB) copied, 0.00108533 s, 60.4 MB/s 00:08:20.954 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:20.954 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:20.954 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:20.954 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:20.954 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:20.955 2035+0 records in 00:08:20.955 2035+0 records out 00:08:20.955 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0123368 s, 84.5 MB/s 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:20.955 456+0 records in 00:08:20.955 456+0 records out 00:08:20.955 233472 bytes (233 kB, 228 KiB) copied, 0.00190888 s, 122 MB/s 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:20.955 14:18:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:21.214 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:21.214 [2024-11-20 14:19:00.160317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.214 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:21.214 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:21.214 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:21.214 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:21.215 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:21.215 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:08:21.215 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:08:21.215 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:21.215 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:21.215 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:08:21.474 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:21.474 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:21.474 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60275 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60275 ']' 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60275 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60275 00:08:21.733 killing process with pid 60275 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60275' 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60275 00:08:21.733 [2024-11-20 14:19:00.510481] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.733 14:19:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60275 00:08:21.733 [2024-11-20 14:19:00.510595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.733 [2024-11-20 14:19:00.510657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.733 [2024-11-20 14:19:00.510695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:21.733 [2024-11-20 14:19:00.693345] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:23.161 ************************************ 00:08:23.161 END TEST raid_function_test_raid0 00:08:23.161 ************************************ 00:08:23.161 14:19:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:08:23.161 00:08:23.161 real 0m4.210s 00:08:23.161 user 0m5.090s 00:08:23.161 sys 0m1.027s 00:08:23.161 14:19:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.161 14:19:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:23.161 14:19:01 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:08:23.161 14:19:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:23.161 14:19:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.161 14:19:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:23.161 
************************************ 00:08:23.161 START TEST raid_function_test_concat 00:08:23.161 ************************************ 00:08:23.161 14:19:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:08:23.161 14:19:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:08:23.161 14:19:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:23.161 14:19:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:23.161 14:19:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60404 00:08:23.161 Process raid pid: 60404 00:08:23.161 14:19:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:23.161 14:19:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60404' 00:08:23.161 14:19:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60404 00:08:23.161 14:19:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60404 ']' 00:08:23.161 14:19:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.161 14:19:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.161 14:19:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:23.161 14:19:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.161 14:19:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:23.161 [2024-11-20 14:19:01.903751] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:08:23.161 [2024-11-20 14:19:01.903976] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.161 [2024-11-20 14:19:02.090545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.420 [2024-11-20 14:19:02.227975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.678 [2024-11-20 14:19:02.436075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.678 [2024-11-20 14:19:02.436132] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.937 14:19:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.937 14:19:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:08:23.937 14:19:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:23.937 14:19:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.937 14:19:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:23.937 Base_1 00:08:23.937 14:19:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.937 14:19:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:23.937 14:19:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:23.937 14:19:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:24.196 Base_2 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:24.196 [2024-11-20 14:19:02.931512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:24.196 [2024-11-20 14:19:02.933877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:24.196 [2024-11-20 14:19:02.933974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:24.196 [2024-11-20 14:19:02.934010] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:24.196 [2024-11-20 14:19:02.934323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:24.196 [2024-11-20 14:19:02.934537] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:24.196 [2024-11-20 14:19:02.934562] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:24.196 [2024-11-20 14:19:02.934740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:24.196 14:19:02 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:24.196 14:19:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:24.454 [2024-11-20 14:19:03.235677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:24.454 /dev/nbd0 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.455 1+0 records in 00:08:24.455 1+0 records out 00:08:24.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335253 s, 12.2 MB/s 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:24.455 
14:19:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:24.455 14:19:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:24.713 14:19:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:24.713 { 00:08:24.713 "nbd_device": "/dev/nbd0", 00:08:24.713 "bdev_name": "raid" 00:08:24.713 } 00:08:24.713 ]' 00:08:24.713 14:19:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:24.713 { 00:08:24.713 "nbd_device": "/dev/nbd0", 00:08:24.713 "bdev_name": "raid" 00:08:24.713 } 00:08:24.713 ]' 00:08:24.713 14:19:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:24.713 14:19:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:24.713 14:19:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:24.713 14:19:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:24.713 14:19:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:08:24.713 14:19:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:08:24.713 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:08:24.713 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:24.713 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:24.713 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:24.713 
14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:24.713 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:24.971 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:24.971 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:24.972 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:24.972 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:24.972 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:24.972 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:24.972 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:24.972 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:24.972 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:24.972 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:24.972 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:24.972 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:24.972 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:24.972 4096+0 records in 00:08:24.972 4096+0 records out 00:08:24.972 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0241525 s, 86.8 MB/s 00:08:24.972 14:19:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:25.231 4096+0 records in 00:08:25.231 4096+0 
records out 00:08:25.231 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.318967 s, 6.6 MB/s 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:25.231 128+0 records in 00:08:25.231 128+0 records out 00:08:25.231 65536 bytes (66 kB, 64 KiB) copied, 0.00103845 s, 63.1 MB/s 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:08:25.231 2035+0 records in 00:08:25.231 2035+0 records out 00:08:25.231 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0102387 s, 102 MB/s 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:25.231 456+0 records in 00:08:25.231 456+0 records out 00:08:25.231 233472 bytes (233 kB, 228 KiB) copied, 0.00240403 s, 97.1 MB/s 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.231 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:25.490 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:25.490 [2024-11-20 14:19:04.459895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.490 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:25.490 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:25.490 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.490 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.490 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:25.490 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:08:25.490 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.490 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:25.749 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:25.749 14:19:04 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60404 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60404 ']' 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60404 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60404 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.009 killing process with pid 60404 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60404' 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60404 00:08:26.009 [2024-11-20 14:19:04.857623] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:26.009 14:19:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60404 00:08:26.009 [2024-11-20 14:19:04.857757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.009 [2024-11-20 14:19:04.857830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.009 [2024-11-20 14:19:04.857849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:26.268 [2024-11-20 14:19:05.042701] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:27.205 14:19:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:08:27.205 00:08:27.205 real 0m4.356s 00:08:27.205 user 0m5.312s 00:08:27.205 sys 0m0.998s 00:08:27.205 14:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.205 ************************************ 00:08:27.205 END TEST raid_function_test_concat 00:08:27.205 ************************************ 00:08:27.205 14:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:27.465 14:19:06 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:08:27.465 14:19:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:27.465 14:19:06 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.465 14:19:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:27.465 ************************************ 00:08:27.465 START TEST raid0_resize_test 00:08:27.465 ************************************ 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60533 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60533' 00:08:27.465 Process raid pid: 60533 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60533 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60533 ']' 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.465 14:19:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.465 [2024-11-20 14:19:06.312496] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:08:27.465 [2024-11-20 14:19:06.312692] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.724 [2024-11-20 14:19:06.506628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.724 [2024-11-20 14:19:06.664229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.983 [2024-11-20 14:19:06.891467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.983 [2024-11-20 14:19:06.891527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.550 Base_1 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.550 Base_2 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.550 [2024-11-20 14:19:07.315527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:28.550 [2024-11-20 14:19:07.317918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:28.550 [2024-11-20 14:19:07.318015] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:28.550 [2024-11-20 14:19:07.318037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:28.550 [2024-11-20 14:19:07.318351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:28.550 [2024-11-20 14:19:07.318519] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:28.550 [2024-11-20 14:19:07.318544] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:28.550 [2024-11-20 14:19:07.318719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.550 [2024-11-20 14:19:07.323515] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:28.550 [2024-11-20 14:19:07.323555] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:28.550 true 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.550 [2024-11-20 14:19:07.335828] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 
00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.550 [2024-11-20 14:19:07.391633] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:28.550 [2024-11-20 14:19:07.391684] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:28.550 [2024-11-20 14:19:07.391742] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:08:28.550 true 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.550 [2024-11-20 14:19:07.403748] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60533 00:08:28.550 14:19:07 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60533 ']' 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60533 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:08:28.550 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.551 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60533 00:08:28.551 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.551 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.551 killing process with pid 60533 00:08:28.551 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60533' 00:08:28.551 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60533 00:08:28.551 [2024-11-20 14:19:07.478527] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:28.551 14:19:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60533 00:08:28.551 [2024-11-20 14:19:07.478652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.551 [2024-11-20 14:19:07.478723] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.551 [2024-11-20 14:19:07.478740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:28.551 [2024-11-20 14:19:07.494281] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.929 14:19:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:29.929 00:08:29.929 real 0m2.333s 00:08:29.929 user 0m2.598s 00:08:29.929 sys 0m0.367s 00:08:29.929 14:19:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:29.929 14:19:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.929 ************************************ 00:08:29.929 END TEST raid0_resize_test 00:08:29.929 ************************************ 00:08:29.929 14:19:08 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:08:29.929 14:19:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:29.929 14:19:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.929 14:19:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:29.929 ************************************ 00:08:29.929 START TEST raid1_resize_test 00:08:29.929 ************************************ 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60600 00:08:29.929 Process raid pid: 60600 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60600' 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60600 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60600 ']' 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.929 14:19:08 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.929 [2024-11-20 14:19:08.678433] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:08:29.929 [2024-11-20 14:19:08.678589] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.930 [2024-11-20 14:19:08.859358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.188 [2024-11-20 14:19:09.016908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.464 [2024-11-20 14:19:09.235894] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.464 [2024-11-20 14:19:09.235968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.723 Base_1 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.723 Base_2 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.723 [2024-11-20 14:19:09.674119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:30.723 [2024-11-20 14:19:09.676552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:30.723 [2024-11-20 14:19:09.676639] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:30.723 [2024-11-20 14:19:09.676660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:30.723 [2024-11-20 14:19:09.676973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:30.723 [2024-11-20 14:19:09.677154] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:30.723 [2024-11-20 14:19:09.677170] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:30.723 [2024-11-20 14:19:09.677350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.723 [2024-11-20 14:19:09.682116] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:30.723 [2024-11-20 14:19:09.682161] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:30.723 true 00:08:30.723 
14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.723 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.723 [2024-11-20 14:19:09.694301] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.017 [2024-11-20 14:19:09.738122] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:31.017 [2024-11-20 14:19:09.738155] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:31.017 [2024-11-20 14:19:09.738192] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:08:31.017 true 00:08:31.017 14:19:09 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:31.017 [2024-11-20 14:19:09.750295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60600 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60600 ']' 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60600 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60600 00:08:31.017 killing process with pid 60600 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:31.017 14:19:09 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60600' 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60600 00:08:31.017 [2024-11-20 14:19:09.818327] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:31.017 14:19:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60600 00:08:31.017 [2024-11-20 14:19:09.818430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.017 [2024-11-20 14:19:09.819065] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.017 [2024-11-20 14:19:09.819238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:31.017 [2024-11-20 14:19:09.834555] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.954 ************************************ 00:08:31.954 END TEST raid1_resize_test 00:08:31.954 ************************************ 00:08:31.954 14:19:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:31.954 00:08:31.954 real 0m2.295s 00:08:31.954 user 0m2.513s 00:08:31.954 sys 0m0.371s 00:08:31.954 14:19:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.954 14:19:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.954 14:19:10 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:31.954 14:19:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:31.954 14:19:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:31.954 14:19:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:31.954 14:19:10 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.954 14:19:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.213 ************************************ 00:08:32.213 START TEST raid_state_function_test 00:08:32.213 ************************************ 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60657 00:08:32.213 Process raid pid: 60657 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60657' 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60657 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60657 ']' 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:32.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.213 14:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.213 [2024-11-20 14:19:11.064241] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:08:32.213 [2024-11-20 14:19:11.064465] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.472 [2024-11-20 14:19:11.261655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.472 [2024-11-20 14:19:11.408378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.731 [2024-11-20 14:19:11.612667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.731 [2024-11-20 14:19:11.612740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.300 [2024-11-20 14:19:12.054794] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.300 [2024-11-20 14:19:12.054866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:08:33.300 [2024-11-20 14:19:12.054885] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.300 [2024-11-20 14:19:12.054903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.300 "name": "Existed_Raid", 00:08:33.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.300 "strip_size_kb": 64, 00:08:33.300 "state": "configuring", 00:08:33.300 "raid_level": "raid0", 00:08:33.300 "superblock": false, 00:08:33.300 "num_base_bdevs": 2, 00:08:33.300 "num_base_bdevs_discovered": 0, 00:08:33.300 "num_base_bdevs_operational": 2, 00:08:33.300 "base_bdevs_list": [ 00:08:33.300 { 00:08:33.300 "name": "BaseBdev1", 00:08:33.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.300 "is_configured": false, 00:08:33.300 "data_offset": 0, 00:08:33.300 "data_size": 0 00:08:33.300 }, 00:08:33.300 { 00:08:33.300 "name": "BaseBdev2", 00:08:33.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.300 "is_configured": false, 00:08:33.300 "data_offset": 0, 00:08:33.300 "data_size": 0 00:08:33.300 } 00:08:33.300 ] 00:08:33.300 }' 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.300 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.869 [2024-11-20 14:19:12.570960] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.869 [2024-11-20 14:19:12.571161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.869 [2024-11-20 14:19:12.578937] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.869 [2024-11-20 14:19:12.579002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.869 [2024-11-20 14:19:12.579020] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.869 [2024-11-20 14:19:12.579040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.869 [2024-11-20 14:19:12.625606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.869 BaseBdev1 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.869 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.869 [ 00:08:33.869 { 00:08:33.869 "name": "BaseBdev1", 00:08:33.869 "aliases": [ 00:08:33.869 "a5f816da-f90f-4c6e-a32b-f5a1d3a520c4" 00:08:33.869 ], 00:08:33.869 "product_name": "Malloc disk", 00:08:33.869 "block_size": 512, 00:08:33.869 "num_blocks": 65536, 00:08:33.869 "uuid": "a5f816da-f90f-4c6e-a32b-f5a1d3a520c4", 00:08:33.869 "assigned_rate_limits": { 00:08:33.869 "rw_ios_per_sec": 0, 00:08:33.869 "rw_mbytes_per_sec": 0, 00:08:33.869 "r_mbytes_per_sec": 0, 00:08:33.869 "w_mbytes_per_sec": 0 00:08:33.869 }, 00:08:33.869 "claimed": true, 00:08:33.869 "claim_type": "exclusive_write", 00:08:33.869 "zoned": false, 00:08:33.869 "supported_io_types": { 00:08:33.869 "read": true, 00:08:33.869 "write": true, 00:08:33.869 "unmap": true, 00:08:33.869 "flush": true, 00:08:33.869 "reset": true, 00:08:33.869 "nvme_admin": false, 00:08:33.869 "nvme_io": 
false, 00:08:33.869 "nvme_io_md": false, 00:08:33.869 "write_zeroes": true, 00:08:33.870 "zcopy": true, 00:08:33.870 "get_zone_info": false, 00:08:33.870 "zone_management": false, 00:08:33.870 "zone_append": false, 00:08:33.870 "compare": false, 00:08:33.870 "compare_and_write": false, 00:08:33.870 "abort": true, 00:08:33.870 "seek_hole": false, 00:08:33.870 "seek_data": false, 00:08:33.870 "copy": true, 00:08:33.870 "nvme_iov_md": false 00:08:33.870 }, 00:08:33.870 "memory_domains": [ 00:08:33.870 { 00:08:33.870 "dma_device_id": "system", 00:08:33.870 "dma_device_type": 1 00:08:33.870 }, 00:08:33.870 { 00:08:33.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.870 "dma_device_type": 2 00:08:33.870 } 00:08:33.870 ], 00:08:33.870 "driver_specific": {} 00:08:33.870 } 00:08:33.870 ] 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.870 14:19:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.870 "name": "Existed_Raid", 00:08:33.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.870 "strip_size_kb": 64, 00:08:33.870 "state": "configuring", 00:08:33.870 "raid_level": "raid0", 00:08:33.870 "superblock": false, 00:08:33.870 "num_base_bdevs": 2, 00:08:33.870 "num_base_bdevs_discovered": 1, 00:08:33.870 "num_base_bdevs_operational": 2, 00:08:33.870 "base_bdevs_list": [ 00:08:33.870 { 00:08:33.870 "name": "BaseBdev1", 00:08:33.870 "uuid": "a5f816da-f90f-4c6e-a32b-f5a1d3a520c4", 00:08:33.870 "is_configured": true, 00:08:33.870 "data_offset": 0, 00:08:33.870 "data_size": 65536 00:08:33.870 }, 00:08:33.870 { 00:08:33.870 "name": "BaseBdev2", 00:08:33.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.870 "is_configured": false, 00:08:33.870 "data_offset": 0, 00:08:33.870 "data_size": 0 00:08:33.870 } 00:08:33.870 ] 00:08:33.870 }' 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.870 14:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.438 14:19:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:34.438 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.438 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.438 [2024-11-20 14:19:13.153857] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:34.438 [2024-11-20 14:19:13.153918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:34.438 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.438 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:34.438 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.438 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.438 [2024-11-20 14:19:13.161888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.438 [2024-11-20 14:19:13.164548] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.438 [2024-11-20 14:19:13.164771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.438 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.438 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:34.438 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.438 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:34.438 14:19:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.438 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.438 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.438 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.438 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.438 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.438 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.439 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.439 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.439 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.439 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.439 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.439 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.439 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.439 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.439 "name": "Existed_Raid", 00:08:34.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.439 "strip_size_kb": 64, 00:08:34.439 "state": "configuring", 00:08:34.439 "raid_level": "raid0", 00:08:34.439 "superblock": false, 00:08:34.439 "num_base_bdevs": 2, 00:08:34.439 "num_base_bdevs_discovered": 1, 00:08:34.439 "num_base_bdevs_operational": 2, 
00:08:34.439 "base_bdevs_list": [ 00:08:34.439 { 00:08:34.439 "name": "BaseBdev1", 00:08:34.439 "uuid": "a5f816da-f90f-4c6e-a32b-f5a1d3a520c4", 00:08:34.439 "is_configured": true, 00:08:34.439 "data_offset": 0, 00:08:34.439 "data_size": 65536 00:08:34.439 }, 00:08:34.439 { 00:08:34.439 "name": "BaseBdev2", 00:08:34.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.439 "is_configured": false, 00:08:34.439 "data_offset": 0, 00:08:34.439 "data_size": 0 00:08:34.439 } 00:08:34.439 ] 00:08:34.439 }' 00:08:34.439 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.439 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.698 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:34.698 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.698 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.958 [2024-11-20 14:19:13.701424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.958 [2024-11-20 14:19:13.701484] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:34.958 [2024-11-20 14:19:13.701498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:34.958 [2024-11-20 14:19:13.701838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:34.958 [2024-11-20 14:19:13.702114] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:34.958 [2024-11-20 14:19:13.702137] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:34.958 BaseBdev2 00:08:34.958 [2024-11-20 14:19:13.702464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.958 
14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.958 [ 00:08:34.958 { 00:08:34.958 "name": "BaseBdev2", 00:08:34.958 "aliases": [ 00:08:34.958 "c60ab154-6f15-40c4-bb19-2c64d3a58611" 00:08:34.958 ], 00:08:34.958 "product_name": "Malloc disk", 00:08:34.958 "block_size": 512, 00:08:34.958 "num_blocks": 65536, 00:08:34.958 "uuid": "c60ab154-6f15-40c4-bb19-2c64d3a58611", 00:08:34.958 "assigned_rate_limits": { 00:08:34.958 "rw_ios_per_sec": 0, 00:08:34.958 "rw_mbytes_per_sec": 0, 
00:08:34.958 "r_mbytes_per_sec": 0, 00:08:34.958 "w_mbytes_per_sec": 0 00:08:34.958 }, 00:08:34.958 "claimed": true, 00:08:34.958 "claim_type": "exclusive_write", 00:08:34.958 "zoned": false, 00:08:34.958 "supported_io_types": { 00:08:34.958 "read": true, 00:08:34.958 "write": true, 00:08:34.958 "unmap": true, 00:08:34.958 "flush": true, 00:08:34.958 "reset": true, 00:08:34.958 "nvme_admin": false, 00:08:34.958 "nvme_io": false, 00:08:34.958 "nvme_io_md": false, 00:08:34.958 "write_zeroes": true, 00:08:34.958 "zcopy": true, 00:08:34.958 "get_zone_info": false, 00:08:34.958 "zone_management": false, 00:08:34.958 "zone_append": false, 00:08:34.958 "compare": false, 00:08:34.958 "compare_and_write": false, 00:08:34.958 "abort": true, 00:08:34.958 "seek_hole": false, 00:08:34.958 "seek_data": false, 00:08:34.958 "copy": true, 00:08:34.958 "nvme_iov_md": false 00:08:34.958 }, 00:08:34.958 "memory_domains": [ 00:08:34.958 { 00:08:34.958 "dma_device_id": "system", 00:08:34.958 "dma_device_type": 1 00:08:34.958 }, 00:08:34.958 { 00:08:34.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.958 "dma_device_type": 2 00:08:34.958 } 00:08:34.958 ], 00:08:34.958 "driver_specific": {} 00:08:34.958 } 00:08:34.958 ] 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.958 "name": "Existed_Raid", 00:08:34.958 "uuid": "8fed9e17-db37-4198-b71b-a14588b19d4d", 00:08:34.958 "strip_size_kb": 64, 00:08:34.958 "state": "online", 00:08:34.958 "raid_level": "raid0", 00:08:34.958 "superblock": false, 00:08:34.958 "num_base_bdevs": 2, 00:08:34.958 "num_base_bdevs_discovered": 2, 00:08:34.958 "num_base_bdevs_operational": 2, 00:08:34.958 "base_bdevs_list": [ 00:08:34.958 { 00:08:34.958 "name": "BaseBdev1", 00:08:34.958 "uuid": "a5f816da-f90f-4c6e-a32b-f5a1d3a520c4", 00:08:34.958 
"is_configured": true, 00:08:34.958 "data_offset": 0, 00:08:34.958 "data_size": 65536 00:08:34.958 }, 00:08:34.958 { 00:08:34.958 "name": "BaseBdev2", 00:08:34.958 "uuid": "c60ab154-6f15-40c4-bb19-2c64d3a58611", 00:08:34.958 "is_configured": true, 00:08:34.958 "data_offset": 0, 00:08:34.958 "data_size": 65536 00:08:34.958 } 00:08:34.958 ] 00:08:34.958 }' 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.958 14:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.528 [2024-11-20 14:19:14.246006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:35.528 "name": "Existed_Raid", 00:08:35.528 "aliases": [ 00:08:35.528 "8fed9e17-db37-4198-b71b-a14588b19d4d" 00:08:35.528 ], 00:08:35.528 "product_name": "Raid Volume", 00:08:35.528 "block_size": 512, 00:08:35.528 "num_blocks": 131072, 00:08:35.528 "uuid": "8fed9e17-db37-4198-b71b-a14588b19d4d", 00:08:35.528 "assigned_rate_limits": { 00:08:35.528 "rw_ios_per_sec": 0, 00:08:35.528 "rw_mbytes_per_sec": 0, 00:08:35.528 "r_mbytes_per_sec": 0, 00:08:35.528 "w_mbytes_per_sec": 0 00:08:35.528 }, 00:08:35.528 "claimed": false, 00:08:35.528 "zoned": false, 00:08:35.528 "supported_io_types": { 00:08:35.528 "read": true, 00:08:35.528 "write": true, 00:08:35.528 "unmap": true, 00:08:35.528 "flush": true, 00:08:35.528 "reset": true, 00:08:35.528 "nvme_admin": false, 00:08:35.528 "nvme_io": false, 00:08:35.528 "nvme_io_md": false, 00:08:35.528 "write_zeroes": true, 00:08:35.528 "zcopy": false, 00:08:35.528 "get_zone_info": false, 00:08:35.528 "zone_management": false, 00:08:35.528 "zone_append": false, 00:08:35.528 "compare": false, 00:08:35.528 "compare_and_write": false, 00:08:35.528 "abort": false, 00:08:35.528 "seek_hole": false, 00:08:35.528 "seek_data": false, 00:08:35.528 "copy": false, 00:08:35.528 "nvme_iov_md": false 00:08:35.528 }, 00:08:35.528 "memory_domains": [ 00:08:35.528 { 00:08:35.528 "dma_device_id": "system", 00:08:35.528 "dma_device_type": 1 00:08:35.528 }, 00:08:35.528 { 00:08:35.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.528 "dma_device_type": 2 00:08:35.528 }, 00:08:35.528 { 00:08:35.528 "dma_device_id": "system", 00:08:35.528 "dma_device_type": 1 00:08:35.528 }, 00:08:35.528 { 00:08:35.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.528 "dma_device_type": 2 00:08:35.528 } 00:08:35.528 ], 00:08:35.528 "driver_specific": { 00:08:35.528 "raid": { 00:08:35.528 "uuid": "8fed9e17-db37-4198-b71b-a14588b19d4d", 00:08:35.528 "strip_size_kb": 64, 00:08:35.528 "state": "online", 00:08:35.528 "raid_level": "raid0", 
00:08:35.528 "superblock": false, 00:08:35.528 "num_base_bdevs": 2, 00:08:35.528 "num_base_bdevs_discovered": 2, 00:08:35.528 "num_base_bdevs_operational": 2, 00:08:35.528 "base_bdevs_list": [ 00:08:35.528 { 00:08:35.528 "name": "BaseBdev1", 00:08:35.528 "uuid": "a5f816da-f90f-4c6e-a32b-f5a1d3a520c4", 00:08:35.528 "is_configured": true, 00:08:35.528 "data_offset": 0, 00:08:35.528 "data_size": 65536 00:08:35.528 }, 00:08:35.528 { 00:08:35.528 "name": "BaseBdev2", 00:08:35.528 "uuid": "c60ab154-6f15-40c4-bb19-2c64d3a58611", 00:08:35.528 "is_configured": true, 00:08:35.528 "data_offset": 0, 00:08:35.528 "data_size": 65536 00:08:35.528 } 00:08:35.528 ] 00:08:35.528 } 00:08:35.528 } 00:08:35.528 }' 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:35.528 BaseBdev2' 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.528 14:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.528 [2024-11-20 14:19:14.501734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:35.528 [2024-11-20 14:19:14.501783] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.528 [2024-11-20 14:19:14.501851] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.788 14:19:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.788 "name": "Existed_Raid", 00:08:35.788 "uuid": "8fed9e17-db37-4198-b71b-a14588b19d4d", 00:08:35.788 "strip_size_kb": 64, 00:08:35.788 "state": "offline", 00:08:35.788 "raid_level": "raid0", 00:08:35.788 "superblock": false, 00:08:35.788 "num_base_bdevs": 2, 00:08:35.788 "num_base_bdevs_discovered": 1, 00:08:35.788 "num_base_bdevs_operational": 1, 00:08:35.788 "base_bdevs_list": [ 00:08:35.788 { 00:08:35.788 "name": null, 00:08:35.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.788 "is_configured": false, 00:08:35.788 "data_offset": 0, 00:08:35.788 "data_size": 65536 00:08:35.788 }, 00:08:35.788 { 00:08:35.788 "name": "BaseBdev2", 00:08:35.788 "uuid": "c60ab154-6f15-40c4-bb19-2c64d3a58611", 00:08:35.788 "is_configured": true, 00:08:35.788 "data_offset": 0, 00:08:35.788 "data_size": 65536 00:08:35.788 } 00:08:35.788 ] 00:08:35.788 }' 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.788 14:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.357 14:19:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.357 [2024-11-20 14:19:15.137278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:36.357 [2024-11-20 14:19:15.137348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60657 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60657 ']' 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60657 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60657 00:08:36.357 killing process with pid 60657 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60657' 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60657 00:08:36.357 [2024-11-20 14:19:15.318361] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.357 14:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60657 00:08:36.357 [2024-11-20 14:19:15.333324] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:37.737 00:08:37.737 real 0m5.451s 00:08:37.737 user 0m8.253s 00:08:37.737 sys 0m0.733s 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:37.737 ************************************ 00:08:37.737 END TEST raid_state_function_test 00:08:37.737 ************************************ 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.737 14:19:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:37.737 14:19:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:37.737 14:19:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.737 14:19:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.737 ************************************ 00:08:37.737 START TEST raid_state_function_test_sb 00:08:37.737 ************************************ 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60910 00:08:37.737 Process raid pid: 60910 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60910' 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:37.737 14:19:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60910 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60910 ']' 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.737 14:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.737 [2024-11-20 14:19:16.556590] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:08:37.737 [2024-11-20 14:19:16.556758] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.996 [2024-11-20 14:19:16.733571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.996 [2024-11-20 14:19:16.870145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.255 [2024-11-20 14:19:17.082061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.255 [2024-11-20 14:19:17.082106] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.514 14:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.514 14:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:38.514 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:38.514 14:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.514 14:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.514 [2024-11-20 14:19:17.464140] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.514 [2024-11-20 14:19:17.464208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.514 [2024-11-20 14:19:17.464225] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.514 [2024-11-20 14:19:17.464242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.514 14:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.514 
14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:38.514 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.514 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.514 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.515 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.515 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:38.515 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.515 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.515 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.515 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.515 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.515 14:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.515 14:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.515 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.515 14:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.773 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.773 "name": "Existed_Raid", 00:08:38.773 "uuid": "0e97d4f6-3103-4eea-a8e3-318d31108c53", 00:08:38.773 "strip_size_kb": 
64, 00:08:38.773 "state": "configuring", 00:08:38.773 "raid_level": "raid0", 00:08:38.773 "superblock": true, 00:08:38.773 "num_base_bdevs": 2, 00:08:38.773 "num_base_bdevs_discovered": 0, 00:08:38.773 "num_base_bdevs_operational": 2, 00:08:38.773 "base_bdevs_list": [ 00:08:38.773 { 00:08:38.773 "name": "BaseBdev1", 00:08:38.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.773 "is_configured": false, 00:08:38.773 "data_offset": 0, 00:08:38.773 "data_size": 0 00:08:38.773 }, 00:08:38.773 { 00:08:38.773 "name": "BaseBdev2", 00:08:38.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.773 "is_configured": false, 00:08:38.773 "data_offset": 0, 00:08:38.773 "data_size": 0 00:08:38.773 } 00:08:38.773 ] 00:08:38.773 }' 00:08:38.773 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.773 14:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.032 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:39.032 14:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.032 14:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.032 [2024-11-20 14:19:17.972208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:39.032 [2024-11-20 14:19:17.972256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:39.032 14:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.032 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:39.032 14:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.032 14:19:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.032 [2024-11-20 14:19:17.980186] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:39.032 [2024-11-20 14:19:17.980242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:39.032 [2024-11-20 14:19:17.980258] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.032 [2024-11-20 14:19:17.980277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.032 14:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.032 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:39.032 14:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.032 14:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.291 [2024-11-20 14:19:18.025772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.292 BaseBdev1 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.292 [ 00:08:39.292 { 00:08:39.292 "name": "BaseBdev1", 00:08:39.292 "aliases": [ 00:08:39.292 "018405e5-7e91-424d-ab45-47d541699388" 00:08:39.292 ], 00:08:39.292 "product_name": "Malloc disk", 00:08:39.292 "block_size": 512, 00:08:39.292 "num_blocks": 65536, 00:08:39.292 "uuid": "018405e5-7e91-424d-ab45-47d541699388", 00:08:39.292 "assigned_rate_limits": { 00:08:39.292 "rw_ios_per_sec": 0, 00:08:39.292 "rw_mbytes_per_sec": 0, 00:08:39.292 "r_mbytes_per_sec": 0, 00:08:39.292 "w_mbytes_per_sec": 0 00:08:39.292 }, 00:08:39.292 "claimed": true, 00:08:39.292 "claim_type": "exclusive_write", 00:08:39.292 "zoned": false, 00:08:39.292 "supported_io_types": { 00:08:39.292 "read": true, 00:08:39.292 "write": true, 00:08:39.292 "unmap": true, 00:08:39.292 "flush": true, 00:08:39.292 "reset": true, 00:08:39.292 "nvme_admin": false, 00:08:39.292 "nvme_io": false, 00:08:39.292 "nvme_io_md": false, 00:08:39.292 "write_zeroes": true, 00:08:39.292 "zcopy": true, 00:08:39.292 "get_zone_info": false, 00:08:39.292 "zone_management": false, 00:08:39.292 "zone_append": false, 00:08:39.292 "compare": false, 00:08:39.292 "compare_and_write": false, 00:08:39.292 
"abort": true, 00:08:39.292 "seek_hole": false, 00:08:39.292 "seek_data": false, 00:08:39.292 "copy": true, 00:08:39.292 "nvme_iov_md": false 00:08:39.292 }, 00:08:39.292 "memory_domains": [ 00:08:39.292 { 00:08:39.292 "dma_device_id": "system", 00:08:39.292 "dma_device_type": 1 00:08:39.292 }, 00:08:39.292 { 00:08:39.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.292 "dma_device_type": 2 00:08:39.292 } 00:08:39.292 ], 00:08:39.292 "driver_specific": {} 00:08:39.292 } 00:08:39.292 ] 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.292 "name": "Existed_Raid", 00:08:39.292 "uuid": "abb5413b-4d10-435e-aada-87a1722c01ba", 00:08:39.292 "strip_size_kb": 64, 00:08:39.292 "state": "configuring", 00:08:39.292 "raid_level": "raid0", 00:08:39.292 "superblock": true, 00:08:39.292 "num_base_bdevs": 2, 00:08:39.292 "num_base_bdevs_discovered": 1, 00:08:39.292 "num_base_bdevs_operational": 2, 00:08:39.292 "base_bdevs_list": [ 00:08:39.292 { 00:08:39.292 "name": "BaseBdev1", 00:08:39.292 "uuid": "018405e5-7e91-424d-ab45-47d541699388", 00:08:39.292 "is_configured": true, 00:08:39.292 "data_offset": 2048, 00:08:39.292 "data_size": 63488 00:08:39.292 }, 00:08:39.292 { 00:08:39.292 "name": "BaseBdev2", 00:08:39.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.292 "is_configured": false, 00:08:39.292 "data_offset": 0, 00:08:39.292 "data_size": 0 00:08:39.292 } 00:08:39.292 ] 00:08:39.292 }' 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.292 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:39.861 [2024-11-20 14:19:18.574014] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:39.861 [2024-11-20 14:19:18.574074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.861 [2024-11-20 14:19:18.582057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.861 [2024-11-20 14:19:18.584655] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.861 [2024-11-20 14:19:18.584708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.861 "name": "Existed_Raid", 00:08:39.861 "uuid": "41f31a6f-bc05-4c39-9f11-346f9d4343d4", 00:08:39.861 "strip_size_kb": 64, 00:08:39.861 "state": "configuring", 00:08:39.861 "raid_level": "raid0", 00:08:39.861 "superblock": true, 00:08:39.861 "num_base_bdevs": 2, 00:08:39.861 "num_base_bdevs_discovered": 1, 00:08:39.861 "num_base_bdevs_operational": 2, 00:08:39.861 "base_bdevs_list": [ 00:08:39.861 { 00:08:39.861 "name": "BaseBdev1", 00:08:39.861 "uuid": "018405e5-7e91-424d-ab45-47d541699388", 00:08:39.861 "is_configured": true, 00:08:39.861 "data_offset": 2048, 
00:08:39.861 "data_size": 63488 00:08:39.861 }, 00:08:39.861 { 00:08:39.861 "name": "BaseBdev2", 00:08:39.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.861 "is_configured": false, 00:08:39.861 "data_offset": 0, 00:08:39.861 "data_size": 0 00:08:39.861 } 00:08:39.861 ] 00:08:39.861 }' 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.861 14:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.121 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:40.121 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.121 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.380 [2024-11-20 14:19:19.125478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.380 [2024-11-20 14:19:19.125777] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:40.380 [2024-11-20 14:19:19.125797] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:40.380 BaseBdev2 00:08:40.380 [2024-11-20 14:19:19.126150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:40.380 [2024-11-20 14:19:19.126358] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:40.380 [2024-11-20 14:19:19.126382] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:40.380 [2024-11-20 14:19:19.126554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.380 [ 00:08:40.380 { 00:08:40.380 "name": "BaseBdev2", 00:08:40.380 "aliases": [ 00:08:40.380 "06d44772-59b1-4a34-a211-3578488f38e0" 00:08:40.380 ], 00:08:40.380 "product_name": "Malloc disk", 00:08:40.380 "block_size": 512, 00:08:40.380 "num_blocks": 65536, 00:08:40.380 "uuid": "06d44772-59b1-4a34-a211-3578488f38e0", 00:08:40.380 "assigned_rate_limits": { 00:08:40.380 "rw_ios_per_sec": 0, 00:08:40.380 "rw_mbytes_per_sec": 0, 00:08:40.380 "r_mbytes_per_sec": 0, 00:08:40.380 "w_mbytes_per_sec": 0 00:08:40.380 }, 00:08:40.380 "claimed": true, 00:08:40.380 "claim_type": 
"exclusive_write", 00:08:40.380 "zoned": false, 00:08:40.380 "supported_io_types": { 00:08:40.380 "read": true, 00:08:40.380 "write": true, 00:08:40.380 "unmap": true, 00:08:40.380 "flush": true, 00:08:40.380 "reset": true, 00:08:40.380 "nvme_admin": false, 00:08:40.380 "nvme_io": false, 00:08:40.380 "nvme_io_md": false, 00:08:40.380 "write_zeroes": true, 00:08:40.380 "zcopy": true, 00:08:40.380 "get_zone_info": false, 00:08:40.380 "zone_management": false, 00:08:40.380 "zone_append": false, 00:08:40.380 "compare": false, 00:08:40.380 "compare_and_write": false, 00:08:40.380 "abort": true, 00:08:40.380 "seek_hole": false, 00:08:40.380 "seek_data": false, 00:08:40.380 "copy": true, 00:08:40.380 "nvme_iov_md": false 00:08:40.380 }, 00:08:40.380 "memory_domains": [ 00:08:40.380 { 00:08:40.380 "dma_device_id": "system", 00:08:40.380 "dma_device_type": 1 00:08:40.380 }, 00:08:40.380 { 00:08:40.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.380 "dma_device_type": 2 00:08:40.380 } 00:08:40.380 ], 00:08:40.380 "driver_specific": {} 00:08:40.380 } 00:08:40.380 ] 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.380 "name": "Existed_Raid", 00:08:40.380 "uuid": "41f31a6f-bc05-4c39-9f11-346f9d4343d4", 00:08:40.380 "strip_size_kb": 64, 00:08:40.380 "state": "online", 00:08:40.380 "raid_level": "raid0", 00:08:40.380 "superblock": true, 00:08:40.380 "num_base_bdevs": 2, 00:08:40.380 "num_base_bdevs_discovered": 2, 00:08:40.380 "num_base_bdevs_operational": 2, 00:08:40.380 "base_bdevs_list": [ 00:08:40.380 { 00:08:40.380 "name": "BaseBdev1", 00:08:40.380 "uuid": "018405e5-7e91-424d-ab45-47d541699388", 00:08:40.380 "is_configured": true, 00:08:40.380 "data_offset": 2048, 00:08:40.380 "data_size": 63488 
00:08:40.380 }, 00:08:40.380 { 00:08:40.380 "name": "BaseBdev2", 00:08:40.380 "uuid": "06d44772-59b1-4a34-a211-3578488f38e0", 00:08:40.380 "is_configured": true, 00:08:40.380 "data_offset": 2048, 00:08:40.380 "data_size": 63488 00:08:40.380 } 00:08:40.380 ] 00:08:40.380 }' 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.380 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.946 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:40.946 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:40.946 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:40.946 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:40.946 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:40.946 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:40.946 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:40.946 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:40.946 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.946 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.946 [2024-11-20 14:19:19.634025] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.946 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.946 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.946 "name": 
"Existed_Raid", 00:08:40.946 "aliases": [ 00:08:40.946 "41f31a6f-bc05-4c39-9f11-346f9d4343d4" 00:08:40.946 ], 00:08:40.946 "product_name": "Raid Volume", 00:08:40.946 "block_size": 512, 00:08:40.946 "num_blocks": 126976, 00:08:40.946 "uuid": "41f31a6f-bc05-4c39-9f11-346f9d4343d4", 00:08:40.946 "assigned_rate_limits": { 00:08:40.946 "rw_ios_per_sec": 0, 00:08:40.946 "rw_mbytes_per_sec": 0, 00:08:40.946 "r_mbytes_per_sec": 0, 00:08:40.946 "w_mbytes_per_sec": 0 00:08:40.946 }, 00:08:40.946 "claimed": false, 00:08:40.946 "zoned": false, 00:08:40.946 "supported_io_types": { 00:08:40.946 "read": true, 00:08:40.946 "write": true, 00:08:40.946 "unmap": true, 00:08:40.946 "flush": true, 00:08:40.946 "reset": true, 00:08:40.946 "nvme_admin": false, 00:08:40.946 "nvme_io": false, 00:08:40.946 "nvme_io_md": false, 00:08:40.947 "write_zeroes": true, 00:08:40.947 "zcopy": false, 00:08:40.947 "get_zone_info": false, 00:08:40.947 "zone_management": false, 00:08:40.947 "zone_append": false, 00:08:40.947 "compare": false, 00:08:40.947 "compare_and_write": false, 00:08:40.947 "abort": false, 00:08:40.947 "seek_hole": false, 00:08:40.947 "seek_data": false, 00:08:40.947 "copy": false, 00:08:40.947 "nvme_iov_md": false 00:08:40.947 }, 00:08:40.947 "memory_domains": [ 00:08:40.947 { 00:08:40.947 "dma_device_id": "system", 00:08:40.947 "dma_device_type": 1 00:08:40.947 }, 00:08:40.947 { 00:08:40.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.947 "dma_device_type": 2 00:08:40.947 }, 00:08:40.947 { 00:08:40.947 "dma_device_id": "system", 00:08:40.947 "dma_device_type": 1 00:08:40.947 }, 00:08:40.947 { 00:08:40.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.947 "dma_device_type": 2 00:08:40.947 } 00:08:40.947 ], 00:08:40.947 "driver_specific": { 00:08:40.947 "raid": { 00:08:40.947 "uuid": "41f31a6f-bc05-4c39-9f11-346f9d4343d4", 00:08:40.947 "strip_size_kb": 64, 00:08:40.947 "state": "online", 00:08:40.947 "raid_level": "raid0", 00:08:40.947 "superblock": true, 00:08:40.947 
"num_base_bdevs": 2, 00:08:40.947 "num_base_bdevs_discovered": 2, 00:08:40.947 "num_base_bdevs_operational": 2, 00:08:40.947 "base_bdevs_list": [ 00:08:40.947 { 00:08:40.947 "name": "BaseBdev1", 00:08:40.947 "uuid": "018405e5-7e91-424d-ab45-47d541699388", 00:08:40.947 "is_configured": true, 00:08:40.947 "data_offset": 2048, 00:08:40.947 "data_size": 63488 00:08:40.947 }, 00:08:40.947 { 00:08:40.947 "name": "BaseBdev2", 00:08:40.947 "uuid": "06d44772-59b1-4a34-a211-3578488f38e0", 00:08:40.947 "is_configured": true, 00:08:40.947 "data_offset": 2048, 00:08:40.947 "data_size": 63488 00:08:40.947 } 00:08:40.947 ] 00:08:40.947 } 00:08:40.947 } 00:08:40.947 }' 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:40.947 BaseBdev2' 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.947 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.947 [2024-11-20 14:19:19.897858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:40.947 [2024-11-20 14:19:19.897906] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.947 [2024-11-20 14:19:19.897972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.205 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:41.205 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:41.205 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:41.205 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:41.205 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:41.205 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:41.205 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:41.205 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.205 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:41.206 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.206 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.206 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:41.206 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.206 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.206 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.206 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.206 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.206 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.206 14:19:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.206 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.206 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.206 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.206 "name": "Existed_Raid", 00:08:41.206 "uuid": "41f31a6f-bc05-4c39-9f11-346f9d4343d4", 00:08:41.206 "strip_size_kb": 64, 00:08:41.206 "state": "offline", 00:08:41.206 "raid_level": "raid0", 00:08:41.206 "superblock": true, 00:08:41.206 "num_base_bdevs": 2, 00:08:41.206 "num_base_bdevs_discovered": 1, 00:08:41.206 "num_base_bdevs_operational": 1, 00:08:41.206 "base_bdevs_list": [ 00:08:41.206 { 00:08:41.206 "name": null, 00:08:41.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.206 "is_configured": false, 00:08:41.206 "data_offset": 0, 00:08:41.206 "data_size": 63488 00:08:41.206 }, 00:08:41.206 { 00:08:41.206 "name": "BaseBdev2", 00:08:41.206 "uuid": "06d44772-59b1-4a34-a211-3578488f38e0", 00:08:41.206 "is_configured": true, 00:08:41.206 "data_offset": 2048, 00:08:41.206 "data_size": 63488 00:08:41.206 } 00:08:41.206 ] 00:08:41.206 }' 00:08:41.206 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.206 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:41.773 14:19:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.773 [2024-11-20 14:19:20.572270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:41.773 [2024-11-20 14:19:20.572381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.773 14:19:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60910 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60910 ']' 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60910 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60910 00:08:41.773 killing process with pid 60910 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60910' 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60910 00:08:41.773 [2024-11-20 14:19:20.745031] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.773 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60910 00:08:42.032 [2024-11-20 14:19:20.760235] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:42.967 14:19:21 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:08:42.967 00:08:42.967 real 0m5.418s 00:08:42.967 user 0m8.137s 00:08:42.967 sys 0m0.748s 00:08:42.967 14:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.967 14:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.967 ************************************ 00:08:42.967 END TEST raid_state_function_test_sb 00:08:42.967 ************************************ 00:08:42.967 14:19:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:42.967 14:19:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:42.967 14:19:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.967 14:19:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:42.967 ************************************ 00:08:42.967 START TEST raid_superblock_test 00:08:42.967 ************************************ 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:42.967 14:19:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61168 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61168 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61168 ']' 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.967 14:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.226 [2024-11-20 14:19:22.043044] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:08:43.226 [2024-11-20 14:19:22.043238] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61168 ] 00:08:43.484 [2024-11-20 14:19:22.224632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.484 [2024-11-20 14:19:22.355803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.743 [2024-11-20 14:19:22.571179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.743 [2024-11-20 14:19:22.571296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:44.363 14:19:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.363 malloc1 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.363 [2024-11-20 14:19:23.078668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:44.363 [2024-11-20 14:19:23.078747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.363 [2024-11-20 14:19:23.078785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:44.363 [2024-11-20 14:19:23.078811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.363 [2024-11-20 14:19:23.082199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.363 [2024-11-20 14:19:23.082249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:44.363 pt1 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:44.363 14:19:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.363 malloc2 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.363 [2024-11-20 14:19:23.128716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:44.363 [2024-11-20 14:19:23.128790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.363 [2024-11-20 14:19:23.128829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:44.363 
[2024-11-20 14:19:23.128844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.363 [2024-11-20 14:19:23.131714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.363 [2024-11-20 14:19:23.131764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:44.363 pt2 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.363 [2024-11-20 14:19:23.136778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:44.363 [2024-11-20 14:19:23.139210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:44.363 [2024-11-20 14:19:23.139428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:44.363 [2024-11-20 14:19:23.139458] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:44.363 [2024-11-20 14:19:23.139782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:44.363 [2024-11-20 14:19:23.140008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:44.363 [2024-11-20 14:19:23.140043] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:44.363 [2024-11-20 14:19:23.140233] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.363 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.364 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.364 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.364 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.364 "name": "raid_bdev1", 00:08:44.364 "uuid": 
"d937001b-6f16-4675-a336-110969380ee3", 00:08:44.364 "strip_size_kb": 64, 00:08:44.364 "state": "online", 00:08:44.364 "raid_level": "raid0", 00:08:44.364 "superblock": true, 00:08:44.364 "num_base_bdevs": 2, 00:08:44.364 "num_base_bdevs_discovered": 2, 00:08:44.364 "num_base_bdevs_operational": 2, 00:08:44.364 "base_bdevs_list": [ 00:08:44.364 { 00:08:44.364 "name": "pt1", 00:08:44.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.364 "is_configured": true, 00:08:44.364 "data_offset": 2048, 00:08:44.364 "data_size": 63488 00:08:44.364 }, 00:08:44.364 { 00:08:44.364 "name": "pt2", 00:08:44.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.364 "is_configured": true, 00:08:44.364 "data_offset": 2048, 00:08:44.364 "data_size": 63488 00:08:44.364 } 00:08:44.364 ] 00:08:44.364 }' 00:08:44.364 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.364 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.636 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:44.636 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:44.636 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.636 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.636 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.636 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.636 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.636 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.636 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.636 14:19:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.636 [2024-11-20 14:19:23.597224] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.636 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.895 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.895 "name": "raid_bdev1", 00:08:44.895 "aliases": [ 00:08:44.895 "d937001b-6f16-4675-a336-110969380ee3" 00:08:44.895 ], 00:08:44.895 "product_name": "Raid Volume", 00:08:44.895 "block_size": 512, 00:08:44.895 "num_blocks": 126976, 00:08:44.895 "uuid": "d937001b-6f16-4675-a336-110969380ee3", 00:08:44.895 "assigned_rate_limits": { 00:08:44.895 "rw_ios_per_sec": 0, 00:08:44.895 "rw_mbytes_per_sec": 0, 00:08:44.895 "r_mbytes_per_sec": 0, 00:08:44.895 "w_mbytes_per_sec": 0 00:08:44.895 }, 00:08:44.895 "claimed": false, 00:08:44.895 "zoned": false, 00:08:44.895 "supported_io_types": { 00:08:44.895 "read": true, 00:08:44.895 "write": true, 00:08:44.895 "unmap": true, 00:08:44.895 "flush": true, 00:08:44.895 "reset": true, 00:08:44.895 "nvme_admin": false, 00:08:44.895 "nvme_io": false, 00:08:44.895 "nvme_io_md": false, 00:08:44.895 "write_zeroes": true, 00:08:44.895 "zcopy": false, 00:08:44.895 "get_zone_info": false, 00:08:44.895 "zone_management": false, 00:08:44.895 "zone_append": false, 00:08:44.895 "compare": false, 00:08:44.895 "compare_and_write": false, 00:08:44.895 "abort": false, 00:08:44.895 "seek_hole": false, 00:08:44.895 "seek_data": false, 00:08:44.895 "copy": false, 00:08:44.895 "nvme_iov_md": false 00:08:44.895 }, 00:08:44.895 "memory_domains": [ 00:08:44.895 { 00:08:44.895 "dma_device_id": "system", 00:08:44.895 "dma_device_type": 1 00:08:44.895 }, 00:08:44.895 { 00:08:44.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.895 "dma_device_type": 2 00:08:44.895 }, 00:08:44.895 { 00:08:44.895 "dma_device_id": "system", 00:08:44.895 "dma_device_type": 
1 00:08:44.895 }, 00:08:44.895 { 00:08:44.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.895 "dma_device_type": 2 00:08:44.895 } 00:08:44.895 ], 00:08:44.895 "driver_specific": { 00:08:44.895 "raid": { 00:08:44.895 "uuid": "d937001b-6f16-4675-a336-110969380ee3", 00:08:44.895 "strip_size_kb": 64, 00:08:44.895 "state": "online", 00:08:44.895 "raid_level": "raid0", 00:08:44.895 "superblock": true, 00:08:44.895 "num_base_bdevs": 2, 00:08:44.895 "num_base_bdevs_discovered": 2, 00:08:44.895 "num_base_bdevs_operational": 2, 00:08:44.895 "base_bdevs_list": [ 00:08:44.895 { 00:08:44.895 "name": "pt1", 00:08:44.895 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.895 "is_configured": true, 00:08:44.895 "data_offset": 2048, 00:08:44.895 "data_size": 63488 00:08:44.895 }, 00:08:44.895 { 00:08:44.895 "name": "pt2", 00:08:44.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.895 "is_configured": true, 00:08:44.895 "data_offset": 2048, 00:08:44.895 "data_size": 63488 00:08:44.895 } 00:08:44.895 ] 00:08:44.895 } 00:08:44.895 } 00:08:44.895 }' 00:08:44.895 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.895 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:44.895 pt2' 00:08:44.895 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.895 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.895 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.895 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:44.895 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:08:44.895 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.895 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.895 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.896 [2024-11-20 14:19:23.829265] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.896 14:19:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d937001b-6f16-4675-a336-110969380ee3 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d937001b-6f16-4675-a336-110969380ee3 ']' 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.896 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.155 [2024-11-20 14:19:23.876894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:45.155 [2024-11-20 14:19:23.876930] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.155 [2024-11-20 14:19:23.877045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.155 [2024-11-20 14:19:23.877113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.155 [2024-11-20 14:19:23.877136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.155 14:19:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.155 14:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.155 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:45.155 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:45.155 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:45.155 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:45.155 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:45.155 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.156 [2024-11-20 14:19:24.016996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:45.156 [2024-11-20 14:19:24.019526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:45.156 [2024-11-20 14:19:24.019651] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:45.156 [2024-11-20 14:19:24.019742] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:45.156 [2024-11-20 14:19:24.019769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:45.156 [2024-11-20 14:19:24.019788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:45.156 request: 00:08:45.156 { 00:08:45.156 "name": "raid_bdev1", 00:08:45.156 "raid_level": "raid0", 00:08:45.156 "base_bdevs": [ 00:08:45.156 "malloc1", 00:08:45.156 "malloc2" 00:08:45.156 ], 00:08:45.156 "strip_size_kb": 64, 00:08:45.156 "superblock": false, 00:08:45.156 "method": "bdev_raid_create", 00:08:45.156 "req_id": 1 00:08:45.156 } 00:08:45.156 Got JSON-RPC error response 00:08:45.156 response: 00:08:45.156 { 00:08:45.156 "code": -17, 00:08:45.156 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:45.156 } 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.156 [2024-11-20 14:19:24.080970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:45.156 [2024-11-20 14:19:24.081063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.156 [2024-11-20 14:19:24.081099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:45.156 [2024-11-20 14:19:24.081116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.156 [2024-11-20 14:19:24.084071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.156 [2024-11-20 14:19:24.084119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:45.156 [2024-11-20 14:19:24.084225] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:45.156 [2024-11-20 14:19:24.084297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:45.156 pt1 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.156 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.415 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.415 "name": "raid_bdev1", 00:08:45.415 "uuid": "d937001b-6f16-4675-a336-110969380ee3", 00:08:45.415 "strip_size_kb": 64, 00:08:45.415 "state": "configuring", 00:08:45.415 "raid_level": "raid0", 00:08:45.415 "superblock": true, 00:08:45.415 "num_base_bdevs": 2, 00:08:45.415 "num_base_bdevs_discovered": 1, 00:08:45.415 "num_base_bdevs_operational": 2, 00:08:45.415 "base_bdevs_list": [ 00:08:45.415 { 00:08:45.415 "name": "pt1", 00:08:45.415 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:45.415 "is_configured": true, 00:08:45.415 "data_offset": 2048, 00:08:45.415 "data_size": 63488 00:08:45.415 }, 00:08:45.415 { 00:08:45.415 "name": null, 00:08:45.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.415 "is_configured": false, 00:08:45.415 "data_offset": 2048, 00:08:45.415 "data_size": 63488 00:08:45.415 } 00:08:45.415 ] 00:08:45.415 }' 00:08:45.415 14:19:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.415 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.674 [2024-11-20 14:19:24.597121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:45.674 [2024-11-20 14:19:24.597224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.674 [2024-11-20 14:19:24.597256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:45.674 [2024-11-20 14:19:24.597274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.674 [2024-11-20 14:19:24.597849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.674 [2024-11-20 14:19:24.597899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:45.674 [2024-11-20 14:19:24.598028] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:45.674 [2024-11-20 14:19:24.598071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:45.674 [2024-11-20 14:19:24.598215] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:45.674 [2024-11-20 14:19:24.598247] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:45.674 [2024-11-20 14:19:24.598560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:45.674 [2024-11-20 14:19:24.598749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:45.674 [2024-11-20 14:19:24.598777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:45.674 [2024-11-20 14:19:24.598960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.674 pt2 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.674 "name": "raid_bdev1", 00:08:45.674 "uuid": "d937001b-6f16-4675-a336-110969380ee3", 00:08:45.674 "strip_size_kb": 64, 00:08:45.674 "state": "online", 00:08:45.674 "raid_level": "raid0", 00:08:45.674 "superblock": true, 00:08:45.674 "num_base_bdevs": 2, 00:08:45.674 "num_base_bdevs_discovered": 2, 00:08:45.674 "num_base_bdevs_operational": 2, 00:08:45.674 "base_bdevs_list": [ 00:08:45.674 { 00:08:45.674 "name": "pt1", 00:08:45.674 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:45.674 "is_configured": true, 00:08:45.674 "data_offset": 2048, 00:08:45.674 "data_size": 63488 00:08:45.674 }, 00:08:45.674 { 00:08:45.674 "name": "pt2", 00:08:45.674 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.674 "is_configured": true, 00:08:45.674 "data_offset": 2048, 00:08:45.674 "data_size": 63488 00:08:45.674 } 00:08:45.674 ] 00:08:45.674 }' 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.674 14:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.242 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:46.242 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:46.242 
14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:46.242 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:46.242 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:46.242 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:46.242 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:46.242 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:46.242 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.242 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.242 [2024-11-20 14:19:25.105552] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.242 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.242 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:46.242 "name": "raid_bdev1", 00:08:46.242 "aliases": [ 00:08:46.242 "d937001b-6f16-4675-a336-110969380ee3" 00:08:46.242 ], 00:08:46.242 "product_name": "Raid Volume", 00:08:46.242 "block_size": 512, 00:08:46.242 "num_blocks": 126976, 00:08:46.242 "uuid": "d937001b-6f16-4675-a336-110969380ee3", 00:08:46.242 "assigned_rate_limits": { 00:08:46.242 "rw_ios_per_sec": 0, 00:08:46.242 "rw_mbytes_per_sec": 0, 00:08:46.242 "r_mbytes_per_sec": 0, 00:08:46.242 "w_mbytes_per_sec": 0 00:08:46.242 }, 00:08:46.242 "claimed": false, 00:08:46.242 "zoned": false, 00:08:46.242 "supported_io_types": { 00:08:46.242 "read": true, 00:08:46.242 "write": true, 00:08:46.242 "unmap": true, 00:08:46.242 "flush": true, 00:08:46.242 "reset": true, 00:08:46.242 "nvme_admin": false, 00:08:46.242 "nvme_io": false, 00:08:46.242 "nvme_io_md": false, 00:08:46.242 
"write_zeroes": true, 00:08:46.242 "zcopy": false, 00:08:46.242 "get_zone_info": false, 00:08:46.242 "zone_management": false, 00:08:46.242 "zone_append": false, 00:08:46.242 "compare": false, 00:08:46.242 "compare_and_write": false, 00:08:46.242 "abort": false, 00:08:46.242 "seek_hole": false, 00:08:46.242 "seek_data": false, 00:08:46.242 "copy": false, 00:08:46.242 "nvme_iov_md": false 00:08:46.242 }, 00:08:46.242 "memory_domains": [ 00:08:46.242 { 00:08:46.242 "dma_device_id": "system", 00:08:46.242 "dma_device_type": 1 00:08:46.242 }, 00:08:46.242 { 00:08:46.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.242 "dma_device_type": 2 00:08:46.242 }, 00:08:46.242 { 00:08:46.242 "dma_device_id": "system", 00:08:46.242 "dma_device_type": 1 00:08:46.242 }, 00:08:46.242 { 00:08:46.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.242 "dma_device_type": 2 00:08:46.242 } 00:08:46.242 ], 00:08:46.242 "driver_specific": { 00:08:46.242 "raid": { 00:08:46.242 "uuid": "d937001b-6f16-4675-a336-110969380ee3", 00:08:46.242 "strip_size_kb": 64, 00:08:46.242 "state": "online", 00:08:46.242 "raid_level": "raid0", 00:08:46.242 "superblock": true, 00:08:46.242 "num_base_bdevs": 2, 00:08:46.242 "num_base_bdevs_discovered": 2, 00:08:46.242 "num_base_bdevs_operational": 2, 00:08:46.242 "base_bdevs_list": [ 00:08:46.242 { 00:08:46.242 "name": "pt1", 00:08:46.242 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:46.242 "is_configured": true, 00:08:46.242 "data_offset": 2048, 00:08:46.242 "data_size": 63488 00:08:46.243 }, 00:08:46.243 { 00:08:46.243 "name": "pt2", 00:08:46.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.243 "is_configured": true, 00:08:46.243 "data_offset": 2048, 00:08:46.243 "data_size": 63488 00:08:46.243 } 00:08:46.243 ] 00:08:46.243 } 00:08:46.243 } 00:08:46.243 }' 00:08:46.243 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:08:46.243 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:46.243 pt2' 00:08:46.243 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.503 14:19:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:46.503 [2024-11-20 14:19:25.361564] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d937001b-6f16-4675-a336-110969380ee3 '!=' d937001b-6f16-4675-a336-110969380ee3 ']' 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61168 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61168 ']' 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61168 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61168 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.503 killing process with pid 61168 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61168' 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61168 00:08:46.503 [2024-11-20 14:19:25.448318] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.503 14:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61168 00:08:46.503 [2024-11-20 14:19:25.448438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.503 [2024-11-20 14:19:25.448512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.503 [2024-11-20 14:19:25.448537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:46.763 [2024-11-20 14:19:25.633037] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.700 14:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:47.700 00:08:47.700 real 0m4.766s 00:08:47.700 user 0m6.950s 00:08:47.700 sys 0m0.731s 00:08:47.700 14:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.700 ************************************ 00:08:47.700 END TEST raid_superblock_test 00:08:47.700 14:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.700 ************************************ 00:08:47.959 14:19:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:47.959 14:19:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:47.959 14:19:26 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:08:47.959 14:19:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.959 ************************************ 00:08:47.959 START TEST raid_read_error_test 00:08:47.959 ************************************ 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ztiBQNiRVi 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61379 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61379 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61379 ']' 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.959 14:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.959 [2024-11-20 14:19:26.839840] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:08:47.959 [2024-11-20 14:19:26.840016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61379 ] 00:08:48.217 [2024-11-20 14:19:27.017261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.217 [2024-11-20 14:19:27.181301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.477 [2024-11-20 14:19:27.380869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.477 [2024-11-20 14:19:27.380966] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.045 BaseBdev1_malloc 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.045 true 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.045 [2024-11-20 14:19:27.874579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:49.045 [2024-11-20 14:19:27.874646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.045 [2024-11-20 14:19:27.874675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:49.045 [2024-11-20 14:19:27.874692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.045 [2024-11-20 14:19:27.877496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.045 [2024-11-20 14:19:27.877562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:49.045 BaseBdev1 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:49.045 BaseBdev2_malloc 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.045 true 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.045 [2024-11-20 14:19:27.929683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:49.045 [2024-11-20 14:19:27.929781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.045 [2024-11-20 14:19:27.929806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:49.045 [2024-11-20 14:19:27.929823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.045 [2024-11-20 14:19:27.932705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.045 [2024-11-20 14:19:27.932785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:49.045 BaseBdev2 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:49.045 14:19:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.045 [2024-11-20 14:19:27.937745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.045 [2024-11-20 14:19:27.940302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.045 [2024-11-20 14:19:27.940620] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:49.045 [2024-11-20 14:19:27.940653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:49.045 [2024-11-20 14:19:27.940949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:49.045 [2024-11-20 14:19:27.941196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:49.045 [2024-11-20 14:19:27.941228] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:49.045 [2024-11-20 14:19:27.941418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.045 14:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.045 "name": "raid_bdev1", 00:08:49.045 "uuid": "7e4080e8-a6e0-4ebb-b911-7c193f0b4001", 00:08:49.045 "strip_size_kb": 64, 00:08:49.045 "state": "online", 00:08:49.045 "raid_level": "raid0", 00:08:49.045 "superblock": true, 00:08:49.045 "num_base_bdevs": 2, 00:08:49.045 "num_base_bdevs_discovered": 2, 00:08:49.045 "num_base_bdevs_operational": 2, 00:08:49.045 "base_bdevs_list": [ 00:08:49.045 { 00:08:49.045 "name": "BaseBdev1", 00:08:49.045 "uuid": "34a8e9d2-b8dd-57c7-9580-3b42b90f6196", 00:08:49.045 "is_configured": true, 00:08:49.045 "data_offset": 2048, 00:08:49.046 "data_size": 63488 00:08:49.046 }, 00:08:49.046 { 00:08:49.046 "name": "BaseBdev2", 00:08:49.046 "uuid": "73bade4e-7b92-55fc-a477-7f573351ad0e", 00:08:49.046 "is_configured": true, 00:08:49.046 "data_offset": 2048, 00:08:49.046 "data_size": 63488 00:08:49.046 } 00:08:49.046 ] 00:08:49.046 }' 00:08:49.046 14:19:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.046 14:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.654 14:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:49.654 14:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:49.654 [2024-11-20 14:19:28.563331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:50.587 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:50.587 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.587 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.587 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.587 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:50.587 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.588 "name": "raid_bdev1", 00:08:50.588 "uuid": "7e4080e8-a6e0-4ebb-b911-7c193f0b4001", 00:08:50.588 "strip_size_kb": 64, 00:08:50.588 "state": "online", 00:08:50.588 "raid_level": "raid0", 00:08:50.588 "superblock": true, 00:08:50.588 "num_base_bdevs": 2, 00:08:50.588 "num_base_bdevs_discovered": 2, 00:08:50.588 "num_base_bdevs_operational": 2, 00:08:50.588 "base_bdevs_list": [ 00:08:50.588 { 00:08:50.588 "name": "BaseBdev1", 00:08:50.588 "uuid": "34a8e9d2-b8dd-57c7-9580-3b42b90f6196", 00:08:50.588 "is_configured": true, 00:08:50.588 "data_offset": 2048, 00:08:50.588 "data_size": 63488 00:08:50.588 }, 00:08:50.588 { 00:08:50.588 "name": "BaseBdev2", 00:08:50.588 "uuid": "73bade4e-7b92-55fc-a477-7f573351ad0e", 00:08:50.588 "is_configured": true, 00:08:50.588 "data_offset": 2048, 00:08:50.588 "data_size": 63488 00:08:50.588 } 00:08:50.588 ] 00:08:50.588 }' 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.588 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.154 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:51.154 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.154 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.154 [2024-11-20 14:19:29.953282] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.154 [2024-11-20 14:19:29.953328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.154 [2024-11-20 14:19:29.956651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.154 [2024-11-20 14:19:29.956727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.154 [2024-11-20 14:19:29.956770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.154 [2024-11-20 14:19:29.956788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:51.154 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.154 { 00:08:51.154 "results": [ 00:08:51.154 { 00:08:51.154 "job": "raid_bdev1", 00:08:51.154 "core_mask": "0x1", 00:08:51.154 "workload": "randrw", 00:08:51.154 "percentage": 50, 00:08:51.154 "status": "finished", 00:08:51.154 "queue_depth": 1, 00:08:51.154 "io_size": 131072, 00:08:51.154 "runtime": 1.387579, 00:08:51.154 "iops": 11153.959522304676, 00:08:51.154 "mibps": 1394.2449402880845, 00:08:51.154 "io_failed": 1, 00:08:51.154 "io_timeout": 0, 00:08:51.154 "avg_latency_us": 124.63440848594486, 00:08:51.154 "min_latency_us": 38.4, 00:08:51.154 "max_latency_us": 1899.0545454545454 00:08:51.154 } 00:08:51.154 ], 00:08:51.154 
"core_count": 1 00:08:51.154 } 00:08:51.154 14:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61379 00:08:51.154 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61379 ']' 00:08:51.154 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61379 00:08:51.154 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:51.154 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.154 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61379 00:08:51.155 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.155 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.155 killing process with pid 61379 00:08:51.155 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61379' 00:08:51.155 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61379 00:08:51.155 [2024-11-20 14:19:29.994025] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.155 14:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61379 00:08:51.155 [2024-11-20 14:19:30.111344] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:52.532 14:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ztiBQNiRVi 00:08:52.532 14:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:52.532 14:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:52.532 14:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:52.532 14:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # 
has_redundancy raid0 00:08:52.532 14:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:52.532 14:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:52.532 14:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:52.532 00:08:52.532 real 0m4.479s 00:08:52.532 user 0m5.576s 00:08:52.532 sys 0m0.538s 00:08:52.532 14:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.532 14:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.532 ************************************ 00:08:52.532 END TEST raid_read_error_test 00:08:52.532 ************************************ 00:08:52.532 14:19:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:52.532 14:19:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:52.532 14:19:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.532 14:19:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.532 ************************************ 00:08:52.532 START TEST raid_write_error_test 00:08:52.532 ************************************ 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jETQfsWO8a 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61525 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 61525 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61525 ']' 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.532 14:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.533 14:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.533 [2024-11-20 14:19:31.366920] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:08:52.533 [2024-11-20 14:19:31.367115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61525 ] 00:08:52.791 [2024-11-20 14:19:31.541330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.791 [2024-11-20 14:19:31.671088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.050 [2024-11-20 14:19:31.874821] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.050 [2024-11-20 14:19:31.874865] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.617 BaseBdev1_malloc 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.617 true 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.617 [2024-11-20 14:19:32.359949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:53.617 [2024-11-20 14:19:32.360033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.617 [2024-11-20 14:19:32.360063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:53.617 [2024-11-20 14:19:32.360081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.617 [2024-11-20 14:19:32.362806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.617 [2024-11-20 14:19:32.362872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:53.617 BaseBdev1 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:53.617 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.618 BaseBdev2_malloc 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:53.618 14:19:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.618 true 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.618 [2024-11-20 14:19:32.415172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:53.618 [2024-11-20 14:19:32.415242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.618 [2024-11-20 14:19:32.415266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:53.618 [2024-11-20 14:19:32.415283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.618 [2024-11-20 14:19:32.418010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.618 [2024-11-20 14:19:32.418071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:53.618 BaseBdev2 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.618 [2024-11-20 14:19:32.423248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:53.618 [2024-11-20 14:19:32.425645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.618 [2024-11-20 14:19:32.425915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:53.618 [2024-11-20 14:19:32.425952] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:53.618 [2024-11-20 14:19:32.426285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:53.618 [2024-11-20 14:19:32.426513] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:53.618 [2024-11-20 14:19:32.426545] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:53.618 [2024-11-20 14:19:32.426737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.618 "name": "raid_bdev1", 00:08:53.618 "uuid": "07e2771a-acc0-4043-a9d9-63264a57c17f", 00:08:53.618 "strip_size_kb": 64, 00:08:53.618 "state": "online", 00:08:53.618 "raid_level": "raid0", 00:08:53.618 "superblock": true, 00:08:53.618 "num_base_bdevs": 2, 00:08:53.618 "num_base_bdevs_discovered": 2, 00:08:53.618 "num_base_bdevs_operational": 2, 00:08:53.618 "base_bdevs_list": [ 00:08:53.618 { 00:08:53.618 "name": "BaseBdev1", 00:08:53.618 "uuid": "33b6f242-e1da-554a-ba44-0bcd3a1c6f45", 00:08:53.618 "is_configured": true, 00:08:53.618 "data_offset": 2048, 00:08:53.618 "data_size": 63488 00:08:53.618 }, 00:08:53.618 { 00:08:53.618 "name": "BaseBdev2", 00:08:53.618 "uuid": "9e19d5c8-d11d-5c1e-be66-b5b7dda6b35d", 00:08:53.618 "is_configured": true, 00:08:53.618 "data_offset": 2048, 00:08:53.618 "data_size": 63488 00:08:53.618 } 00:08:53.618 ] 00:08:53.618 }' 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.618 14:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.184 14:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:54.184 14:19:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:54.184 [2024-11-20 14:19:33.080779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.120 14:19:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.120 14:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.120 14:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.120 "name": "raid_bdev1", 00:08:55.120 "uuid": "07e2771a-acc0-4043-a9d9-63264a57c17f", 00:08:55.120 "strip_size_kb": 64, 00:08:55.120 "state": "online", 00:08:55.120 "raid_level": "raid0", 00:08:55.120 "superblock": true, 00:08:55.120 "num_base_bdevs": 2, 00:08:55.120 "num_base_bdevs_discovered": 2, 00:08:55.120 "num_base_bdevs_operational": 2, 00:08:55.120 "base_bdevs_list": [ 00:08:55.120 { 00:08:55.121 "name": "BaseBdev1", 00:08:55.121 "uuid": "33b6f242-e1da-554a-ba44-0bcd3a1c6f45", 00:08:55.121 "is_configured": true, 00:08:55.121 "data_offset": 2048, 00:08:55.121 "data_size": 63488 00:08:55.121 }, 00:08:55.121 { 00:08:55.121 "name": "BaseBdev2", 00:08:55.121 "uuid": "9e19d5c8-d11d-5c1e-be66-b5b7dda6b35d", 00:08:55.121 "is_configured": true, 00:08:55.121 "data_offset": 2048, 00:08:55.121 "data_size": 63488 00:08:55.121 } 00:08:55.121 ] 00:08:55.121 }' 00:08:55.121 14:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.121 14:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.688 14:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:08:55.688 14:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.688 14:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.688 [2024-11-20 14:19:34.499755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.688 [2024-11-20 14:19:34.499964] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.688 [2024-11-20 14:19:34.503457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.688 [2024-11-20 14:19:34.503639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.688 [2024-11-20 14:19:34.503730] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.688 [2024-11-20 14:19:34.503882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:55.688 { 00:08:55.688 "results": [ 00:08:55.688 { 00:08:55.688 "job": "raid_bdev1", 00:08:55.688 "core_mask": "0x1", 00:08:55.688 "workload": "randrw", 00:08:55.688 "percentage": 50, 00:08:55.688 "status": "finished", 00:08:55.688 "queue_depth": 1, 00:08:55.688 "io_size": 131072, 00:08:55.688 "runtime": 1.416791, 00:08:55.688 "iops": 10975.507326062912, 00:08:55.688 "mibps": 1371.938415757864, 00:08:55.688 "io_failed": 1, 00:08:55.688 "io_timeout": 0, 00:08:55.688 "avg_latency_us": 126.33027048830533, 00:08:55.688 "min_latency_us": 38.86545454545455, 00:08:55.688 "max_latency_us": 1936.290909090909 00:08:55.688 } 00:08:55.688 ], 00:08:55.688 "core_count": 1 00:08:55.688 } 00:08:55.688 14:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.688 14:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61525 00:08:55.688 14:19:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61525 ']' 00:08:55.688 14:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61525 00:08:55.688 14:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:55.688 14:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.688 14:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61525 00:08:55.688 14:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:55.688 14:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:55.688 14:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61525' 00:08:55.688 killing process with pid 61525 00:08:55.688 14:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61525 00:08:55.688 [2024-11-20 14:19:34.541794] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.688 14:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61525 00:08:55.688 [2024-11-20 14:19:34.661185] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.064 14:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jETQfsWO8a 00:08:57.064 14:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:57.064 14:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:57.064 14:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:57.064 14:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:57.064 14:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.064 14:19:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:08:57.064 14:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:57.064 00:08:57.064 real 0m4.539s 00:08:57.064 user 0m5.696s 00:08:57.064 sys 0m0.529s 00:08:57.064 ************************************ 00:08:57.064 END TEST raid_write_error_test 00:08:57.064 ************************************ 00:08:57.064 14:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.065 14:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.065 14:19:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:57.065 14:19:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:57.065 14:19:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:57.065 14:19:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.065 14:19:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:57.065 ************************************ 00:08:57.065 START TEST raid_state_function_test 00:08:57.065 ************************************ 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:57.065 Process raid pid: 61663 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61663 
00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61663' 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61663 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61663 ']' 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.065 14:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.065 [2024-11-20 14:19:35.969089] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:08:57.065 [2024-11-20 14:19:35.969270] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.323 [2024-11-20 14:19:36.159402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.323 [2024-11-20 14:19:36.291591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.581 [2024-11-20 14:19:36.498641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.581 [2024-11-20 14:19:36.498690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.148 [2024-11-20 14:19:36.946180] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.148 [2024-11-20 14:19:36.946383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.148 [2024-11-20 14:19:36.946413] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.148 [2024-11-20 14:19:36.946432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.148 14:19:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.148 14:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.148 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.148 "name": "Existed_Raid", 00:08:58.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.149 "strip_size_kb": 64, 00:08:58.149 "state": "configuring", 00:08:58.149 
"raid_level": "concat", 00:08:58.149 "superblock": false, 00:08:58.149 "num_base_bdevs": 2, 00:08:58.149 "num_base_bdevs_discovered": 0, 00:08:58.149 "num_base_bdevs_operational": 2, 00:08:58.149 "base_bdevs_list": [ 00:08:58.149 { 00:08:58.149 "name": "BaseBdev1", 00:08:58.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.149 "is_configured": false, 00:08:58.149 "data_offset": 0, 00:08:58.149 "data_size": 0 00:08:58.149 }, 00:08:58.149 { 00:08:58.149 "name": "BaseBdev2", 00:08:58.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.149 "is_configured": false, 00:08:58.149 "data_offset": 0, 00:08:58.149 "data_size": 0 00:08:58.149 } 00:08:58.149 ] 00:08:58.149 }' 00:08:58.149 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.149 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.715 [2024-11-20 14:19:37.434287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.715 [2024-11-20 14:19:37.434328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:58.715 [2024-11-20 14:19:37.442259] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.715 [2024-11-20 14:19:37.442312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.715 [2024-11-20 14:19:37.442327] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.715 [2024-11-20 14:19:37.442346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.715 [2024-11-20 14:19:37.487638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.715 BaseBdev1 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.715 [ 00:08:58.715 { 00:08:58.715 "name": "BaseBdev1", 00:08:58.715 "aliases": [ 00:08:58.715 "61e511d6-1b12-4134-8ee2-902a35d64b61" 00:08:58.715 ], 00:08:58.715 "product_name": "Malloc disk", 00:08:58.715 "block_size": 512, 00:08:58.715 "num_blocks": 65536, 00:08:58.715 "uuid": "61e511d6-1b12-4134-8ee2-902a35d64b61", 00:08:58.715 "assigned_rate_limits": { 00:08:58.715 "rw_ios_per_sec": 0, 00:08:58.715 "rw_mbytes_per_sec": 0, 00:08:58.715 "r_mbytes_per_sec": 0, 00:08:58.715 "w_mbytes_per_sec": 0 00:08:58.715 }, 00:08:58.715 "claimed": true, 00:08:58.715 "claim_type": "exclusive_write", 00:08:58.715 "zoned": false, 00:08:58.715 "supported_io_types": { 00:08:58.715 "read": true, 00:08:58.715 "write": true, 00:08:58.715 "unmap": true, 00:08:58.715 "flush": true, 00:08:58.715 "reset": true, 00:08:58.715 "nvme_admin": false, 00:08:58.715 "nvme_io": false, 00:08:58.715 "nvme_io_md": false, 00:08:58.715 "write_zeroes": true, 00:08:58.715 "zcopy": true, 00:08:58.715 "get_zone_info": false, 00:08:58.715 "zone_management": false, 00:08:58.715 "zone_append": false, 00:08:58.715 "compare": false, 00:08:58.715 "compare_and_write": false, 00:08:58.715 "abort": true, 00:08:58.715 "seek_hole": false, 00:08:58.715 "seek_data": false, 00:08:58.715 "copy": true, 00:08:58.715 "nvme_iov_md": 
false 00:08:58.715 }, 00:08:58.715 "memory_domains": [ 00:08:58.715 { 00:08:58.715 "dma_device_id": "system", 00:08:58.715 "dma_device_type": 1 00:08:58.715 }, 00:08:58.715 { 00:08:58.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.715 "dma_device_type": 2 00:08:58.715 } 00:08:58.715 ], 00:08:58.715 "driver_specific": {} 00:08:58.715 } 00:08:58.715 ] 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.715 14:19:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.715 "name": "Existed_Raid", 00:08:58.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.715 "strip_size_kb": 64, 00:08:58.715 "state": "configuring", 00:08:58.715 "raid_level": "concat", 00:08:58.715 "superblock": false, 00:08:58.715 "num_base_bdevs": 2, 00:08:58.715 "num_base_bdevs_discovered": 1, 00:08:58.715 "num_base_bdevs_operational": 2, 00:08:58.715 "base_bdevs_list": [ 00:08:58.715 { 00:08:58.715 "name": "BaseBdev1", 00:08:58.715 "uuid": "61e511d6-1b12-4134-8ee2-902a35d64b61", 00:08:58.715 "is_configured": true, 00:08:58.715 "data_offset": 0, 00:08:58.715 "data_size": 65536 00:08:58.715 }, 00:08:58.715 { 00:08:58.715 "name": "BaseBdev2", 00:08:58.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.715 "is_configured": false, 00:08:58.715 "data_offset": 0, 00:08:58.715 "data_size": 0 00:08:58.715 } 00:08:58.715 ] 00:08:58.715 }' 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.715 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.283 14:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:59.283 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.283 14:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.283 [2024-11-20 14:19:37.999828] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.283 [2024-11-20 14:19:37.999888] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.283 [2024-11-20 14:19:38.007864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.283 [2024-11-20 14:19:38.010275] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.283 [2024-11-20 14:19:38.010324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.283 "name": "Existed_Raid", 00:08:59.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.283 "strip_size_kb": 64, 00:08:59.283 "state": "configuring", 00:08:59.283 "raid_level": "concat", 00:08:59.283 "superblock": false, 00:08:59.283 "num_base_bdevs": 2, 00:08:59.283 "num_base_bdevs_discovered": 1, 00:08:59.283 "num_base_bdevs_operational": 2, 00:08:59.283 "base_bdevs_list": [ 00:08:59.283 { 00:08:59.283 "name": "BaseBdev1", 00:08:59.283 "uuid": "61e511d6-1b12-4134-8ee2-902a35d64b61", 00:08:59.283 "is_configured": true, 00:08:59.283 "data_offset": 0, 00:08:59.283 "data_size": 65536 00:08:59.283 }, 00:08:59.283 { 00:08:59.283 "name": "BaseBdev2", 00:08:59.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.283 "is_configured": false, 00:08:59.283 "data_offset": 0, 00:08:59.283 "data_size": 0 
00:08:59.283 } 00:08:59.283 ] 00:08:59.283 }' 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.283 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.542 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:59.542 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.542 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.800 [2024-11-20 14:19:38.558668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.800 [2024-11-20 14:19:38.558727] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:59.800 [2024-11-20 14:19:38.558740] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:59.800 [2024-11-20 14:19:38.559120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:59.800 [2024-11-20 14:19:38.559343] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:59.800 [2024-11-20 14:19:38.559365] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:59.800 [2024-11-20 14:19:38.559682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.800 BaseBdev2 00:08:59.800 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.800 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:59.800 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:59.800 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.800 14:19:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:59.800 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.800 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.801 [ 00:08:59.801 { 00:08:59.801 "name": "BaseBdev2", 00:08:59.801 "aliases": [ 00:08:59.801 "e6d199ab-b0f4-4cc2-be18-f1556ee26a27" 00:08:59.801 ], 00:08:59.801 "product_name": "Malloc disk", 00:08:59.801 "block_size": 512, 00:08:59.801 "num_blocks": 65536, 00:08:59.801 "uuid": "e6d199ab-b0f4-4cc2-be18-f1556ee26a27", 00:08:59.801 "assigned_rate_limits": { 00:08:59.801 "rw_ios_per_sec": 0, 00:08:59.801 "rw_mbytes_per_sec": 0, 00:08:59.801 "r_mbytes_per_sec": 0, 00:08:59.801 "w_mbytes_per_sec": 0 00:08:59.801 }, 00:08:59.801 "claimed": true, 00:08:59.801 "claim_type": "exclusive_write", 00:08:59.801 "zoned": false, 00:08:59.801 "supported_io_types": { 00:08:59.801 "read": true, 00:08:59.801 "write": true, 00:08:59.801 "unmap": true, 00:08:59.801 "flush": true, 00:08:59.801 "reset": true, 00:08:59.801 "nvme_admin": false, 00:08:59.801 "nvme_io": false, 00:08:59.801 "nvme_io_md": 
false, 00:08:59.801 "write_zeroes": true, 00:08:59.801 "zcopy": true, 00:08:59.801 "get_zone_info": false, 00:08:59.801 "zone_management": false, 00:08:59.801 "zone_append": false, 00:08:59.801 "compare": false, 00:08:59.801 "compare_and_write": false, 00:08:59.801 "abort": true, 00:08:59.801 "seek_hole": false, 00:08:59.801 "seek_data": false, 00:08:59.801 "copy": true, 00:08:59.801 "nvme_iov_md": false 00:08:59.801 }, 00:08:59.801 "memory_domains": [ 00:08:59.801 { 00:08:59.801 "dma_device_id": "system", 00:08:59.801 "dma_device_type": 1 00:08:59.801 }, 00:08:59.801 { 00:08:59.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.801 "dma_device_type": 2 00:08:59.801 } 00:08:59.801 ], 00:08:59.801 "driver_specific": {} 00:08:59.801 } 00:08:59.801 ] 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.801 "name": "Existed_Raid", 00:08:59.801 "uuid": "4122d44b-a063-4382-b8a6-1651ce7971a1", 00:08:59.801 "strip_size_kb": 64, 00:08:59.801 "state": "online", 00:08:59.801 "raid_level": "concat", 00:08:59.801 "superblock": false, 00:08:59.801 "num_base_bdevs": 2, 00:08:59.801 "num_base_bdevs_discovered": 2, 00:08:59.801 "num_base_bdevs_operational": 2, 00:08:59.801 "base_bdevs_list": [ 00:08:59.801 { 00:08:59.801 "name": "BaseBdev1", 00:08:59.801 "uuid": "61e511d6-1b12-4134-8ee2-902a35d64b61", 00:08:59.801 "is_configured": true, 00:08:59.801 "data_offset": 0, 00:08:59.801 "data_size": 65536 00:08:59.801 }, 00:08:59.801 { 00:08:59.801 "name": "BaseBdev2", 00:08:59.801 "uuid": "e6d199ab-b0f4-4cc2-be18-f1556ee26a27", 00:08:59.801 "is_configured": true, 00:08:59.801 "data_offset": 0, 00:08:59.801 "data_size": 65536 00:08:59.801 } 00:08:59.801 ] 00:08:59.801 }' 00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:59.801 14:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.395 [2024-11-20 14:19:39.091213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.395 "name": "Existed_Raid", 00:09:00.395 "aliases": [ 00:09:00.395 "4122d44b-a063-4382-b8a6-1651ce7971a1" 00:09:00.395 ], 00:09:00.395 "product_name": "Raid Volume", 00:09:00.395 "block_size": 512, 00:09:00.395 "num_blocks": 131072, 00:09:00.395 "uuid": "4122d44b-a063-4382-b8a6-1651ce7971a1", 00:09:00.395 "assigned_rate_limits": { 00:09:00.395 "rw_ios_per_sec": 0, 00:09:00.395 "rw_mbytes_per_sec": 0, 00:09:00.395 "r_mbytes_per_sec": 
0, 00:09:00.395 "w_mbytes_per_sec": 0 00:09:00.395 }, 00:09:00.395 "claimed": false, 00:09:00.395 "zoned": false, 00:09:00.395 "supported_io_types": { 00:09:00.395 "read": true, 00:09:00.395 "write": true, 00:09:00.395 "unmap": true, 00:09:00.395 "flush": true, 00:09:00.395 "reset": true, 00:09:00.395 "nvme_admin": false, 00:09:00.395 "nvme_io": false, 00:09:00.395 "nvme_io_md": false, 00:09:00.395 "write_zeroes": true, 00:09:00.395 "zcopy": false, 00:09:00.395 "get_zone_info": false, 00:09:00.395 "zone_management": false, 00:09:00.395 "zone_append": false, 00:09:00.395 "compare": false, 00:09:00.395 "compare_and_write": false, 00:09:00.395 "abort": false, 00:09:00.395 "seek_hole": false, 00:09:00.395 "seek_data": false, 00:09:00.395 "copy": false, 00:09:00.395 "nvme_iov_md": false 00:09:00.395 }, 00:09:00.395 "memory_domains": [ 00:09:00.395 { 00:09:00.395 "dma_device_id": "system", 00:09:00.395 "dma_device_type": 1 00:09:00.395 }, 00:09:00.395 { 00:09:00.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.395 "dma_device_type": 2 00:09:00.395 }, 00:09:00.395 { 00:09:00.395 "dma_device_id": "system", 00:09:00.395 "dma_device_type": 1 00:09:00.395 }, 00:09:00.395 { 00:09:00.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.395 "dma_device_type": 2 00:09:00.395 } 00:09:00.395 ], 00:09:00.395 "driver_specific": { 00:09:00.395 "raid": { 00:09:00.395 "uuid": "4122d44b-a063-4382-b8a6-1651ce7971a1", 00:09:00.395 "strip_size_kb": 64, 00:09:00.395 "state": "online", 00:09:00.395 "raid_level": "concat", 00:09:00.395 "superblock": false, 00:09:00.395 "num_base_bdevs": 2, 00:09:00.395 "num_base_bdevs_discovered": 2, 00:09:00.395 "num_base_bdevs_operational": 2, 00:09:00.395 "base_bdevs_list": [ 00:09:00.395 { 00:09:00.395 "name": "BaseBdev1", 00:09:00.395 "uuid": "61e511d6-1b12-4134-8ee2-902a35d64b61", 00:09:00.395 "is_configured": true, 00:09:00.395 "data_offset": 0, 00:09:00.395 "data_size": 65536 00:09:00.395 }, 00:09:00.395 { 00:09:00.395 "name": "BaseBdev2", 
00:09:00.395 "uuid": "e6d199ab-b0f4-4cc2-be18-f1556ee26a27", 00:09:00.395 "is_configured": true, 00:09:00.395 "data_offset": 0, 00:09:00.395 "data_size": 65536 00:09:00.395 } 00:09:00.395 ] 00:09:00.395 } 00:09:00.395 } 00:09:00.395 }' 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:00.395 BaseBdev2' 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.395 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.395 [2024-11-20 14:19:39.338975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:00.395 [2024-11-20 14:19:39.339032] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.395 [2024-11-20 14:19:39.339102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.654 "name": "Existed_Raid", 00:09:00.654 "uuid": "4122d44b-a063-4382-b8a6-1651ce7971a1", 00:09:00.654 "strip_size_kb": 64, 00:09:00.654 
"state": "offline", 00:09:00.654 "raid_level": "concat", 00:09:00.654 "superblock": false, 00:09:00.654 "num_base_bdevs": 2, 00:09:00.654 "num_base_bdevs_discovered": 1, 00:09:00.654 "num_base_bdevs_operational": 1, 00:09:00.654 "base_bdevs_list": [ 00:09:00.654 { 00:09:00.654 "name": null, 00:09:00.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.654 "is_configured": false, 00:09:00.654 "data_offset": 0, 00:09:00.654 "data_size": 65536 00:09:00.654 }, 00:09:00.654 { 00:09:00.654 "name": "BaseBdev2", 00:09:00.654 "uuid": "e6d199ab-b0f4-4cc2-be18-f1556ee26a27", 00:09:00.654 "is_configured": true, 00:09:00.654 "data_offset": 0, 00:09:00.654 "data_size": 65536 00:09:00.654 } 00:09:00.654 ] 00:09:00.654 }' 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.654 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.221 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:01.221 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.221 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.221 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.221 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.221 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:01.221 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.221 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:01.221 14:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:01.221 14:19:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:01.221 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.221 14:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.221 [2024-11-20 14:19:39.975142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:01.221 [2024-11-20 14:19:39.975340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61663 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61663 ']' 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61663 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61663 00:09:01.221 killing process with pid 61663 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61663' 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61663 00:09:01.221 [2024-11-20 14:19:40.146677] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.221 14:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61663 00:09:01.221 [2024-11-20 14:19:40.161446] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.596 ************************************ 00:09:02.596 END TEST raid_state_function_test 00:09:02.596 ************************************ 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:02.596 00:09:02.596 real 0m5.348s 00:09:02.596 user 0m8.044s 00:09:02.596 sys 0m0.740s 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.596 14:19:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:02.596 14:19:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:09:02.596 14:19:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.596 14:19:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:02.596 ************************************ 00:09:02.596 START TEST raid_state_function_test_sb 00:09:02.596 ************************************ 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:02.596 Process raid pid: 61922 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:02.596 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:02.597 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61922 00:09:02.597 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61922' 00:09:02.597 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:02.597 14:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61922 00:09:02.597 14:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61922 ']' 00:09:02.597 14:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.597 14:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.597 14:19:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.597 14:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.597 14:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.597 [2024-11-20 14:19:41.371364] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:09:02.597 [2024-11-20 14:19:41.371785] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.597 [2024-11-20 14:19:41.556459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.854 [2024-11-20 14:19:41.708547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.112 [2024-11-20 14:19:41.942394] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.112 [2024-11-20 14:19:41.942658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.679 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.679 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:03.679 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:03.679 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.679 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.679 [2024-11-20 14:19:42.381599] bdev.c:8485:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:09:03.679 [2024-11-20 14:19:42.381666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:03.679 [2024-11-20 14:19:42.381684] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:03.679 [2024-11-20 14:19:42.381702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:03.679 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.679 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:03.679 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.679 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.679 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.679 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.679 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:03.679 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.679 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.679 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.679 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.679 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.680 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.680 
14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.680 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.680 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.680 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.680 "name": "Existed_Raid", 00:09:03.680 "uuid": "dc47897d-b0dd-493a-b29e-7e93e48bad6c", 00:09:03.680 "strip_size_kb": 64, 00:09:03.680 "state": "configuring", 00:09:03.680 "raid_level": "concat", 00:09:03.680 "superblock": true, 00:09:03.680 "num_base_bdevs": 2, 00:09:03.680 "num_base_bdevs_discovered": 0, 00:09:03.680 "num_base_bdevs_operational": 2, 00:09:03.680 "base_bdevs_list": [ 00:09:03.680 { 00:09:03.680 "name": "BaseBdev1", 00:09:03.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.680 "is_configured": false, 00:09:03.680 "data_offset": 0, 00:09:03.680 "data_size": 0 00:09:03.680 }, 00:09:03.680 { 00:09:03.680 "name": "BaseBdev2", 00:09:03.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.680 "is_configured": false, 00:09:03.680 "data_offset": 0, 00:09:03.680 "data_size": 0 00:09:03.680 } 00:09:03.680 ] 00:09:03.680 }' 00:09:03.680 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.680 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.939 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:03.939 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.939 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.939 [2024-11-20 14:19:42.885630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:09:03.939 [2024-11-20 14:19:42.885671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:03.939 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.939 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:03.939 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.939 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.939 [2024-11-20 14:19:42.893625] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:03.939 [2024-11-20 14:19:42.893678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:03.939 [2024-11-20 14:19:42.893694] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:03.939 [2024-11-20 14:19:42.893713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:03.939 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.939 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:03.939 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.939 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.198 [2024-11-20 14:19:42.938323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.198 BaseBdev1 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.198 [ 00:09:04.198 { 00:09:04.198 "name": "BaseBdev1", 00:09:04.198 "aliases": [ 00:09:04.198 "7a98a366-6663-4a8e-9a6e-9ee50f9be56d" 00:09:04.198 ], 00:09:04.198 "product_name": "Malloc disk", 00:09:04.198 "block_size": 512, 00:09:04.198 "num_blocks": 65536, 00:09:04.198 "uuid": "7a98a366-6663-4a8e-9a6e-9ee50f9be56d", 00:09:04.198 "assigned_rate_limits": { 00:09:04.198 "rw_ios_per_sec": 0, 00:09:04.198 "rw_mbytes_per_sec": 0, 00:09:04.198 "r_mbytes_per_sec": 0, 00:09:04.198 "w_mbytes_per_sec": 0 00:09:04.198 }, 00:09:04.198 "claimed": true, 
00:09:04.198 "claim_type": "exclusive_write", 00:09:04.198 "zoned": false, 00:09:04.198 "supported_io_types": { 00:09:04.198 "read": true, 00:09:04.198 "write": true, 00:09:04.198 "unmap": true, 00:09:04.198 "flush": true, 00:09:04.198 "reset": true, 00:09:04.198 "nvme_admin": false, 00:09:04.198 "nvme_io": false, 00:09:04.198 "nvme_io_md": false, 00:09:04.198 "write_zeroes": true, 00:09:04.198 "zcopy": true, 00:09:04.198 "get_zone_info": false, 00:09:04.198 "zone_management": false, 00:09:04.198 "zone_append": false, 00:09:04.198 "compare": false, 00:09:04.198 "compare_and_write": false, 00:09:04.198 "abort": true, 00:09:04.198 "seek_hole": false, 00:09:04.198 "seek_data": false, 00:09:04.198 "copy": true, 00:09:04.198 "nvme_iov_md": false 00:09:04.198 }, 00:09:04.198 "memory_domains": [ 00:09:04.198 { 00:09:04.198 "dma_device_id": "system", 00:09:04.198 "dma_device_type": 1 00:09:04.198 }, 00:09:04.198 { 00:09:04.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.198 "dma_device_type": 2 00:09:04.198 } 00:09:04.198 ], 00:09:04.198 "driver_specific": {} 00:09:04.198 } 00:09:04.198 ] 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.198 14:19:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.198 14:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.198 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.198 "name": "Existed_Raid", 00:09:04.198 "uuid": "4a1299ce-f4a2-4bc2-82c7-47a5dfa3a320", 00:09:04.198 "strip_size_kb": 64, 00:09:04.198 "state": "configuring", 00:09:04.198 "raid_level": "concat", 00:09:04.198 "superblock": true, 00:09:04.198 "num_base_bdevs": 2, 00:09:04.198 "num_base_bdevs_discovered": 1, 00:09:04.198 "num_base_bdevs_operational": 2, 00:09:04.198 "base_bdevs_list": [ 00:09:04.198 { 00:09:04.198 "name": "BaseBdev1", 00:09:04.198 "uuid": "7a98a366-6663-4a8e-9a6e-9ee50f9be56d", 00:09:04.198 "is_configured": true, 00:09:04.198 "data_offset": 2048, 00:09:04.198 "data_size": 63488 00:09:04.198 }, 00:09:04.198 { 00:09:04.198 "name": "BaseBdev2", 00:09:04.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.198 
"is_configured": false, 00:09:04.198 "data_offset": 0, 00:09:04.198 "data_size": 0 00:09:04.198 } 00:09:04.198 ] 00:09:04.198 }' 00:09:04.198 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.198 14:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.765 [2024-11-20 14:19:43.490516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:04.765 [2024-11-20 14:19:43.490576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.765 [2024-11-20 14:19:43.498562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.765 [2024-11-20 14:19:43.501060] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.765 [2024-11-20 14:19:43.501113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.765 14:19:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.765 14:19:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.765 "name": "Existed_Raid", 00:09:04.765 "uuid": "0d5cc028-480d-4f9c-9e02-d538887795c9", 00:09:04.765 "strip_size_kb": 64, 00:09:04.765 "state": "configuring", 00:09:04.765 "raid_level": "concat", 00:09:04.765 "superblock": true, 00:09:04.765 "num_base_bdevs": 2, 00:09:04.765 "num_base_bdevs_discovered": 1, 00:09:04.765 "num_base_bdevs_operational": 2, 00:09:04.765 "base_bdevs_list": [ 00:09:04.765 { 00:09:04.765 "name": "BaseBdev1", 00:09:04.765 "uuid": "7a98a366-6663-4a8e-9a6e-9ee50f9be56d", 00:09:04.765 "is_configured": true, 00:09:04.765 "data_offset": 2048, 00:09:04.765 "data_size": 63488 00:09:04.765 }, 00:09:04.765 { 00:09:04.765 "name": "BaseBdev2", 00:09:04.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.765 "is_configured": false, 00:09:04.765 "data_offset": 0, 00:09:04.765 "data_size": 0 00:09:04.765 } 00:09:04.765 ] 00:09:04.765 }' 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.765 14:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.332 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:05.332 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.332 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.332 [2024-11-20 14:19:44.056966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.332 [2024-11-20 14:19:44.057543] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:05.332 BaseBdev2 00:09:05.332 [2024-11-20 14:19:44.057683] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:05.332 [2024-11-20 14:19:44.058056] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:05.332 [2024-11-20 14:19:44.058256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:05.332 [2024-11-20 14:19:44.058280] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:05.332 [2024-11-20 14:19:44.058448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.332 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.332 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:05.332 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:05.332 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.332 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:05.332 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.332 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.332 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.332 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.332 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.332 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.332 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:05.332 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.332 
14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.332 [ 00:09:05.332 { 00:09:05.332 "name": "BaseBdev2", 00:09:05.332 "aliases": [ 00:09:05.332 "d4e23c9c-d26a-472e-9204-4310f699780f" 00:09:05.332 ], 00:09:05.332 "product_name": "Malloc disk", 00:09:05.332 "block_size": 512, 00:09:05.332 "num_blocks": 65536, 00:09:05.332 "uuid": "d4e23c9c-d26a-472e-9204-4310f699780f", 00:09:05.332 "assigned_rate_limits": { 00:09:05.332 "rw_ios_per_sec": 0, 00:09:05.332 "rw_mbytes_per_sec": 0, 00:09:05.332 "r_mbytes_per_sec": 0, 00:09:05.332 "w_mbytes_per_sec": 0 00:09:05.332 }, 00:09:05.332 "claimed": true, 00:09:05.332 "claim_type": "exclusive_write", 00:09:05.332 "zoned": false, 00:09:05.332 "supported_io_types": { 00:09:05.332 "read": true, 00:09:05.332 "write": true, 00:09:05.332 "unmap": true, 00:09:05.332 "flush": true, 00:09:05.332 "reset": true, 00:09:05.332 "nvme_admin": false, 00:09:05.332 "nvme_io": false, 00:09:05.332 "nvme_io_md": false, 00:09:05.332 "write_zeroes": true, 00:09:05.332 "zcopy": true, 00:09:05.332 "get_zone_info": false, 00:09:05.332 "zone_management": false, 00:09:05.332 "zone_append": false, 00:09:05.332 "compare": false, 00:09:05.332 "compare_and_write": false, 00:09:05.332 "abort": true, 00:09:05.332 "seek_hole": false, 00:09:05.333 "seek_data": false, 00:09:05.333 "copy": true, 00:09:05.333 "nvme_iov_md": false 00:09:05.333 }, 00:09:05.333 "memory_domains": [ 00:09:05.333 { 00:09:05.333 "dma_device_id": "system", 00:09:05.333 "dma_device_type": 1 00:09:05.333 }, 00:09:05.333 { 00:09:05.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.333 "dma_device_type": 2 00:09:05.333 } 00:09:05.333 ], 00:09:05.333 "driver_specific": {} 00:09:05.333 } 00:09:05.333 ] 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:05.333 14:19:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.333 14:19:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.333 "name": "Existed_Raid", 00:09:05.333 "uuid": "0d5cc028-480d-4f9c-9e02-d538887795c9", 00:09:05.333 "strip_size_kb": 64, 00:09:05.333 "state": "online", 00:09:05.333 "raid_level": "concat", 00:09:05.333 "superblock": true, 00:09:05.333 "num_base_bdevs": 2, 00:09:05.333 "num_base_bdevs_discovered": 2, 00:09:05.333 "num_base_bdevs_operational": 2, 00:09:05.333 "base_bdevs_list": [ 00:09:05.333 { 00:09:05.333 "name": "BaseBdev1", 00:09:05.333 "uuid": "7a98a366-6663-4a8e-9a6e-9ee50f9be56d", 00:09:05.333 "is_configured": true, 00:09:05.333 "data_offset": 2048, 00:09:05.333 "data_size": 63488 00:09:05.333 }, 00:09:05.333 { 00:09:05.333 "name": "BaseBdev2", 00:09:05.333 "uuid": "d4e23c9c-d26a-472e-9204-4310f699780f", 00:09:05.333 "is_configured": true, 00:09:05.333 "data_offset": 2048, 00:09:05.333 "data_size": 63488 00:09:05.333 } 00:09:05.333 ] 00:09:05.333 }' 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.333 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.901 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:05.901 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:05.901 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:05.901 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.901 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.901 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.901 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:09:05.901 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.901 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.901 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.901 [2024-11-20 14:19:44.593500] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.901 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.901 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.901 "name": "Existed_Raid", 00:09:05.901 "aliases": [ 00:09:05.901 "0d5cc028-480d-4f9c-9e02-d538887795c9" 00:09:05.901 ], 00:09:05.901 "product_name": "Raid Volume", 00:09:05.901 "block_size": 512, 00:09:05.901 "num_blocks": 126976, 00:09:05.901 "uuid": "0d5cc028-480d-4f9c-9e02-d538887795c9", 00:09:05.901 "assigned_rate_limits": { 00:09:05.901 "rw_ios_per_sec": 0, 00:09:05.901 "rw_mbytes_per_sec": 0, 00:09:05.901 "r_mbytes_per_sec": 0, 00:09:05.901 "w_mbytes_per_sec": 0 00:09:05.901 }, 00:09:05.901 "claimed": false, 00:09:05.901 "zoned": false, 00:09:05.901 "supported_io_types": { 00:09:05.901 "read": true, 00:09:05.901 "write": true, 00:09:05.901 "unmap": true, 00:09:05.901 "flush": true, 00:09:05.901 "reset": true, 00:09:05.901 "nvme_admin": false, 00:09:05.901 "nvme_io": false, 00:09:05.901 "nvme_io_md": false, 00:09:05.901 "write_zeroes": true, 00:09:05.901 "zcopy": false, 00:09:05.901 "get_zone_info": false, 00:09:05.901 "zone_management": false, 00:09:05.901 "zone_append": false, 00:09:05.901 "compare": false, 00:09:05.901 "compare_and_write": false, 00:09:05.901 "abort": false, 00:09:05.902 "seek_hole": false, 00:09:05.902 "seek_data": false, 00:09:05.902 "copy": false, 00:09:05.902 "nvme_iov_md": false 00:09:05.902 }, 00:09:05.902 "memory_domains": [ 00:09:05.902 { 00:09:05.902 
"dma_device_id": "system", 00:09:05.902 "dma_device_type": 1 00:09:05.902 }, 00:09:05.902 { 00:09:05.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.902 "dma_device_type": 2 00:09:05.902 }, 00:09:05.902 { 00:09:05.902 "dma_device_id": "system", 00:09:05.902 "dma_device_type": 1 00:09:05.902 }, 00:09:05.902 { 00:09:05.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.902 "dma_device_type": 2 00:09:05.902 } 00:09:05.902 ], 00:09:05.902 "driver_specific": { 00:09:05.902 "raid": { 00:09:05.902 "uuid": "0d5cc028-480d-4f9c-9e02-d538887795c9", 00:09:05.902 "strip_size_kb": 64, 00:09:05.902 "state": "online", 00:09:05.902 "raid_level": "concat", 00:09:05.902 "superblock": true, 00:09:05.902 "num_base_bdevs": 2, 00:09:05.902 "num_base_bdevs_discovered": 2, 00:09:05.902 "num_base_bdevs_operational": 2, 00:09:05.902 "base_bdevs_list": [ 00:09:05.902 { 00:09:05.902 "name": "BaseBdev1", 00:09:05.902 "uuid": "7a98a366-6663-4a8e-9a6e-9ee50f9be56d", 00:09:05.902 "is_configured": true, 00:09:05.902 "data_offset": 2048, 00:09:05.902 "data_size": 63488 00:09:05.902 }, 00:09:05.902 { 00:09:05.902 "name": "BaseBdev2", 00:09:05.902 "uuid": "d4e23c9c-d26a-472e-9204-4310f699780f", 00:09:05.902 "is_configured": true, 00:09:05.902 "data_offset": 2048, 00:09:05.902 "data_size": 63488 00:09:05.902 } 00:09:05.902 ] 00:09:05.902 } 00:09:05.902 } 00:09:05.902 }' 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:05.902 BaseBdev2' 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:05.902 14:19:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.902 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.902 [2024-11-20 14:19:44.849281] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:05.902 [2024-11-20 14:19:44.849323] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.902 [2024-11-20 14:19:44.849388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.161 "name": "Existed_Raid", 00:09:06.161 "uuid": "0d5cc028-480d-4f9c-9e02-d538887795c9", 00:09:06.161 "strip_size_kb": 64, 00:09:06.161 "state": "offline", 00:09:06.161 "raid_level": "concat", 00:09:06.161 "superblock": true, 00:09:06.161 "num_base_bdevs": 2, 00:09:06.161 "num_base_bdevs_discovered": 1, 00:09:06.161 "num_base_bdevs_operational": 1, 00:09:06.161 "base_bdevs_list": [ 00:09:06.161 { 00:09:06.161 "name": null, 00:09:06.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.161 "is_configured": false, 00:09:06.161 "data_offset": 0, 00:09:06.161 "data_size": 63488 00:09:06.161 }, 00:09:06.161 { 00:09:06.161 "name": "BaseBdev2", 00:09:06.161 "uuid": "d4e23c9c-d26a-472e-9204-4310f699780f", 00:09:06.161 "is_configured": true, 00:09:06.161 "data_offset": 2048, 00:09:06.161 "data_size": 63488 00:09:06.161 } 00:09:06.161 ] 
00:09:06.161 }'
00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:06.161 14:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:06.729 [2024-11-20 14:19:45.493614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:06.729 [2024-11-20 14:19:45.493679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61922
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61922 ']'
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61922
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61922
killing process with pid 61922
14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61922'
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61922
00:09:06.729 [2024-11-20 14:19:45.665922] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:06.729 14:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61922
00:09:06.729 [2024-11-20 14:19:45.680498] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:08.106 14:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:09:08.106
00:09:08.106 real 0m5.460s
00:09:08.106 user 0m8.251s
00:09:08.106 sys 0m0.776s
00:09:08.106 14:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:08.106 14:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:08.106 ************************************
00:09:08.106 END TEST raid_state_function_test_sb
00:09:08.106 ************************************
00:09:08.106 14:19:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2
00:09:08.106 14:19:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:08.106 14:19:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:08.106 14:19:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:08.106 ************************************
00:09:08.106 START TEST raid_superblock_test
00:09:08.106 ************************************
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']'
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62179
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62179
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62179 ']'
14:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:08.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:08.106 14:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.106 [2024-11-20 14:19:46.874544] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:09:08.106 [2024-11-20 14:19:46.874943] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62179 ]
00:09:08.106 [2024-11-20 14:19:47.059373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:08.364 [2024-11-20 14:19:47.189720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:08.622 [2024-11-20 14:19:47.391076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:08.622 [2024-11-20 14:19:47.391153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:08.880 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:08.880 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:09:08.880 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:09:08.880 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:08.880 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:09:08.880 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:09:08.880 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:09:08.880 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:08.880 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:08.880 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:08.880 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:09:08.880 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:08.880 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.139 malloc1
00:09:09.139 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.139 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:09.139 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.139 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.139 [2024-11-20 14:19:47.879227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:09.139 [2024-11-20 14:19:47.879294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:09.139 [2024-11-20 14:19:47.879326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:09:09.139 [2024-11-20 14:19:47.879356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:09.139 [2024-11-20 14:19:47.882140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:09.139 [2024-11-20 14:19:47.882183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:09.139 pt1
00:09:09.139 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.139 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:09.139 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:09.139 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:09:09.139 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:09:09.139 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:09:09.139 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.140 [2024-11-20 14:19:47.930728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:09.140 [2024-11-20 14:19:47.930792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:09.140 [2024-11-20 14:19:47.930828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:09:09.140 [2024-11-20 14:19:47.930842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:09.140 [2024-11-20 14:19:47.933630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:09.140 [2024-11-20 14:19:47.933670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:09.140 pt2
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.140 [2024-11-20 14:19:47.938790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:09.140 [2024-11-20 14:19:47.941254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:09.140 [2024-11-20 14:19:47.941464] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:09:09.140 [2024-11-20 14:19:47.941484] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:09:09.140 [2024-11-20 14:19:47.941793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:09.140 [2024-11-20 14:19:47.942005] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:09:09.140 [2024-11-20 14:19:47.942027] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:09:09.140 [2024-11-20 14:19:47.942207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:09.140 "name": "raid_bdev1",
00:09:09.140 "uuid": "4d2da6a2-0bd8-47c0-b69b-aa58ae505c99",
00:09:09.140 "strip_size_kb": 64,
00:09:09.140 "state": "online",
00:09:09.140 "raid_level": "concat",
00:09:09.140 "superblock": true,
00:09:09.140 "num_base_bdevs": 2,
00:09:09.140 "num_base_bdevs_discovered": 2,
00:09:09.140 "num_base_bdevs_operational": 2,
00:09:09.140 "base_bdevs_list": [
00:09:09.140 {
00:09:09.140 "name": "pt1",
00:09:09.140 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:09.140 "is_configured": true,
00:09:09.140 "data_offset": 2048,
00:09:09.140 "data_size": 63488
00:09:09.140 },
00:09:09.140 {
00:09:09.140 "name": "pt2",
00:09:09.140 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:09.140 "is_configured": true,
00:09:09.140 "data_offset": 2048,
00:09:09.140 "data_size": 63488
00:09:09.140 }
00:09:09.140 ]
00:09:09.140 }'
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:09.140 14:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.708 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:09:09.708 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:09.708 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:09.708 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:09.708 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:09.708 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:09.708 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:09.708 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:09.708 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.708 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.708 [2024-11-20 14:19:48.455259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:09.708 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.708 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:09.708 "name": "raid_bdev1",
00:09:09.708 "aliases": [
00:09:09.708 "4d2da6a2-0bd8-47c0-b69b-aa58ae505c99"
00:09:09.708 ],
00:09:09.708 "product_name": "Raid Volume",
00:09:09.708 "block_size": 512,
00:09:09.708 "num_blocks": 126976,
00:09:09.708 "uuid": "4d2da6a2-0bd8-47c0-b69b-aa58ae505c99",
00:09:09.708 "assigned_rate_limits": {
00:09:09.708 "rw_ios_per_sec": 0,
00:09:09.708 "rw_mbytes_per_sec": 0,
00:09:09.708 "r_mbytes_per_sec": 0,
00:09:09.708 "w_mbytes_per_sec": 0
00:09:09.708 },
00:09:09.708 "claimed": false,
00:09:09.708 "zoned": false,
00:09:09.708 "supported_io_types": {
00:09:09.708 "read": true,
00:09:09.708 "write": true,
00:09:09.708 "unmap": true,
00:09:09.708 "flush": true,
00:09:09.708 "reset": true,
00:09:09.708 "nvme_admin": false,
00:09:09.708 "nvme_io": false,
00:09:09.708 "nvme_io_md": false,
00:09:09.708 "write_zeroes": true,
00:09:09.708 "zcopy": false,
00:09:09.708 "get_zone_info": false,
00:09:09.708 "zone_management": false,
00:09:09.708 "zone_append": false,
00:09:09.708 "compare": false,
00:09:09.708 "compare_and_write": false,
00:09:09.708 "abort": false,
00:09:09.708 "seek_hole": false,
00:09:09.708 "seek_data": false,
00:09:09.708 "copy": false,
00:09:09.708 "nvme_iov_md": false
00:09:09.708 },
00:09:09.708 "memory_domains": [
00:09:09.708 {
00:09:09.708 "dma_device_id": "system",
00:09:09.708 "dma_device_type": 1
00:09:09.708 },
00:09:09.708 {
00:09:09.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:09.708 "dma_device_type": 2
00:09:09.708 },
00:09:09.708 {
00:09:09.708 "dma_device_id": "system",
00:09:09.709 "dma_device_type": 1
00:09:09.709 },
00:09:09.709 {
00:09:09.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:09.709 "dma_device_type": 2
00:09:09.709 }
00:09:09.709 ],
00:09:09.709 "driver_specific": {
00:09:09.709 "raid": {
00:09:09.709 "uuid": "4d2da6a2-0bd8-47c0-b69b-aa58ae505c99",
00:09:09.709 "strip_size_kb": 64,
00:09:09.709 "state": "online",
00:09:09.709 "raid_level": "concat",
00:09:09.709 "superblock": true,
00:09:09.709 "num_base_bdevs": 2,
00:09:09.709 "num_base_bdevs_discovered": 2,
00:09:09.709 "num_base_bdevs_operational": 2,
00:09:09.709 "base_bdevs_list": [
00:09:09.709 {
00:09:09.709 "name": "pt1",
00:09:09.709 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:09.709 "is_configured": true,
00:09:09.709 "data_offset": 2048,
00:09:09.709 "data_size": 63488
00:09:09.709 },
00:09:09.709 {
00:09:09.709 "name": "pt2",
00:09:09.709 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:09.709 "is_configured": true,
00:09:09.709 "data_offset": 2048,
00:09:09.709 "data_size": 63488
00:09:09.709 }
00:09:09.709 ]
00:09:09.709 }
00:09:09.709 }
00:09:09.709 }'
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:09.709 pt2'
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.709 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.970 [2024-11-20 14:19:48.715281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4d2da6a2-0bd8-47c0-b69b-aa58ae505c99
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4d2da6a2-0bd8-47c0-b69b-aa58ae505c99 ']'
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.970 [2024-11-20 14:19:48.754907] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:09.970 [2024-11-20 14:19:48.754937] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:09.970 [2024-11-20 14:19:48.755067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:09.970 [2024-11-20 14:19:48.755131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:09.970 [2024-11-20 14:19:48.755150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:09.970 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.971 [2024-11-20 14:19:48.895022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:09.971 [2024-11-20 14:19:48.897506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:09.971 [2024-11-20 14:19:48.897596] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:09:09.971 [2024-11-20 14:19:48.897667] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:09:09.971 [2024-11-20 14:19:48.897692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:09.971 [2024-11-20 14:19:48.897707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:09:09.971 request:
00:09:09.971 {
00:09:09.971 "name": "raid_bdev1",
00:09:09.971 "raid_level": "concat",
00:09:09.971 "base_bdevs": [
00:09:09.971 "malloc1",
00:09:09.971 "malloc2"
00:09:09.971 ],
00:09:09.971 "strip_size_kb": 64,
00:09:09.971 "superblock": false,
00:09:09.971 "method": "bdev_raid_create",
00:09:09.971 "req_id": 1
00:09:09.971 }
00:09:09.971 Got JSON-RPC error response
00:09:09.971 response:
00:09:09.971 {
00:09:09.971 "code": -17,
00:09:09.971 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:09.971 }
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.971 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:10.231 [2024-11-20 14:19:48.950990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:10.231 [2024-11-20 14:19:48.951060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:10.231 [2024-11-20 14:19:48.951084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:09:10.231 [2024-11-20 14:19:48.951100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:10.231 [2024-11-20 14:19:48.953879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:10.231 [2024-11-20 14:19:48.953937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:10.231 [2024-11-20 14:19:48.954040] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:09:10.231 [2024-11-20 14:19:48.954112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:10.231 pt1
00:09:10.231 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:10.231 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2
00:09:10.231 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:10.231 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:10.231 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:10.231 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:10.231 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:10.231 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:10.231 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:10.231 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:10.231 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:10.231 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:10.231 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:10.231 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:10.231 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:10.231 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:10.231 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:10.231 "name": "raid_bdev1",
00:09:10.231 "uuid": "4d2da6a2-0bd8-47c0-b69b-aa58ae505c99",
00:09:10.231 "strip_size_kb": 64,
00:09:10.231 "state": "configuring",
00:09:10.231 "raid_level": "concat",
00:09:10.231 "superblock": true,
00:09:10.231 "num_base_bdevs": 2,
00:09:10.231 "num_base_bdevs_discovered": 1,
00:09:10.231 "num_base_bdevs_operational": 2,
00:09:10.231 "base_bdevs_list": [
00:09:10.231 { 00:09:10.231 "name": "pt1", 00:09:10.231 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.231 "is_configured": true, 00:09:10.231 "data_offset": 2048, 00:09:10.231 "data_size": 63488 00:09:10.231 }, 00:09:10.231 { 00:09:10.231 "name": null, 00:09:10.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.231 "is_configured": false, 00:09:10.231 "data_offset": 2048, 00:09:10.231 "data_size": 63488 00:09:10.231 } 00:09:10.231 ] 00:09:10.231 }' 00:09:10.231 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.231 14:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.796 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:10.796 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:10.796 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:10.796 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:10.796 14:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.796 14:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.796 [2024-11-20 14:19:49.475180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:10.796 [2024-11-20 14:19:49.475264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.796 [2024-11-20 14:19:49.475295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:10.796 [2024-11-20 14:19:49.475311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.796 [2024-11-20 14:19:49.475863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.796 [2024-11-20 14:19:49.475899] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:10.796 [2024-11-20 14:19:49.476021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:10.796 [2024-11-20 14:19:49.476063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:10.796 [2024-11-20 14:19:49.476204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:10.796 [2024-11-20 14:19:49.476225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:10.796 [2024-11-20 14:19:49.476521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:10.796 [2024-11-20 14:19:49.476689] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:10.796 [2024-11-20 14:19:49.476721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:10.796 [2024-11-20 14:19:49.476883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.796 pt2 00:09:10.796 14:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.796 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:10.796 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:10.796 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:10.796 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.796 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.796 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.796 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:09:10.796 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:10.796 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.797 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.797 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.797 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.797 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.797 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.797 14:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.797 14:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.797 14:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.797 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.797 "name": "raid_bdev1", 00:09:10.797 "uuid": "4d2da6a2-0bd8-47c0-b69b-aa58ae505c99", 00:09:10.797 "strip_size_kb": 64, 00:09:10.797 "state": "online", 00:09:10.797 "raid_level": "concat", 00:09:10.797 "superblock": true, 00:09:10.797 "num_base_bdevs": 2, 00:09:10.797 "num_base_bdevs_discovered": 2, 00:09:10.797 "num_base_bdevs_operational": 2, 00:09:10.797 "base_bdevs_list": [ 00:09:10.797 { 00:09:10.797 "name": "pt1", 00:09:10.797 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.797 "is_configured": true, 00:09:10.797 "data_offset": 2048, 00:09:10.797 "data_size": 63488 00:09:10.797 }, 00:09:10.797 { 00:09:10.797 "name": "pt2", 00:09:10.797 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.797 "is_configured": true, 00:09:10.797 "data_offset": 2048, 00:09:10.797 "data_size": 
63488 00:09:10.797 } 00:09:10.797 ] 00:09:10.797 }' 00:09:10.797 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.797 14:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.055 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:11.055 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:11.055 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:11.055 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.055 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.055 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.055 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.055 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:11.055 14:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.055 14:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.055 [2024-11-20 14:19:49.971618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.055 14:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.055 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.055 "name": "raid_bdev1", 00:09:11.055 "aliases": [ 00:09:11.055 "4d2da6a2-0bd8-47c0-b69b-aa58ae505c99" 00:09:11.055 ], 00:09:11.055 "product_name": "Raid Volume", 00:09:11.055 "block_size": 512, 00:09:11.055 "num_blocks": 126976, 00:09:11.055 "uuid": "4d2da6a2-0bd8-47c0-b69b-aa58ae505c99", 00:09:11.055 "assigned_rate_limits": { 00:09:11.055 
"rw_ios_per_sec": 0, 00:09:11.055 "rw_mbytes_per_sec": 0, 00:09:11.055 "r_mbytes_per_sec": 0, 00:09:11.055 "w_mbytes_per_sec": 0 00:09:11.055 }, 00:09:11.055 "claimed": false, 00:09:11.055 "zoned": false, 00:09:11.055 "supported_io_types": { 00:09:11.055 "read": true, 00:09:11.055 "write": true, 00:09:11.055 "unmap": true, 00:09:11.055 "flush": true, 00:09:11.055 "reset": true, 00:09:11.055 "nvme_admin": false, 00:09:11.055 "nvme_io": false, 00:09:11.055 "nvme_io_md": false, 00:09:11.055 "write_zeroes": true, 00:09:11.055 "zcopy": false, 00:09:11.055 "get_zone_info": false, 00:09:11.055 "zone_management": false, 00:09:11.055 "zone_append": false, 00:09:11.055 "compare": false, 00:09:11.055 "compare_and_write": false, 00:09:11.055 "abort": false, 00:09:11.055 "seek_hole": false, 00:09:11.055 "seek_data": false, 00:09:11.055 "copy": false, 00:09:11.055 "nvme_iov_md": false 00:09:11.055 }, 00:09:11.055 "memory_domains": [ 00:09:11.055 { 00:09:11.055 "dma_device_id": "system", 00:09:11.055 "dma_device_type": 1 00:09:11.055 }, 00:09:11.055 { 00:09:11.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.055 "dma_device_type": 2 00:09:11.055 }, 00:09:11.055 { 00:09:11.055 "dma_device_id": "system", 00:09:11.055 "dma_device_type": 1 00:09:11.055 }, 00:09:11.055 { 00:09:11.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.055 "dma_device_type": 2 00:09:11.055 } 00:09:11.055 ], 00:09:11.055 "driver_specific": { 00:09:11.055 "raid": { 00:09:11.055 "uuid": "4d2da6a2-0bd8-47c0-b69b-aa58ae505c99", 00:09:11.055 "strip_size_kb": 64, 00:09:11.055 "state": "online", 00:09:11.055 "raid_level": "concat", 00:09:11.055 "superblock": true, 00:09:11.055 "num_base_bdevs": 2, 00:09:11.055 "num_base_bdevs_discovered": 2, 00:09:11.055 "num_base_bdevs_operational": 2, 00:09:11.055 "base_bdevs_list": [ 00:09:11.055 { 00:09:11.055 "name": "pt1", 00:09:11.055 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:11.055 "is_configured": true, 00:09:11.055 "data_offset": 2048, 00:09:11.055 
"data_size": 63488 00:09:11.055 }, 00:09:11.055 { 00:09:11.055 "name": "pt2", 00:09:11.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.055 "is_configured": true, 00:09:11.055 "data_offset": 2048, 00:09:11.055 "data_size": 63488 00:09:11.055 } 00:09:11.055 ] 00:09:11.056 } 00:09:11.056 } 00:09:11.056 }' 00:09:11.056 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:11.314 pt2' 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:11.314 [2024-11-20 14:19:50.211668] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4d2da6a2-0bd8-47c0-b69b-aa58ae505c99 '!=' 4d2da6a2-0bd8-47c0-b69b-aa58ae505c99 ']' 00:09:11.314 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:11.315 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.315 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.315 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62179 00:09:11.315 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62179 ']' 
00:09:11.315 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62179 00:09:11.315 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:11.315 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.315 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62179 00:09:11.315 killing process with pid 62179 00:09:11.315 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.315 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.315 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62179' 00:09:11.315 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62179 00:09:11.315 [2024-11-20 14:19:50.291335] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:11.315 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62179 00:09:11.315 [2024-11-20 14:19:50.291452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.315 [2024-11-20 14:19:50.291515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.315 [2024-11-20 14:19:50.291536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:11.572 [2024-11-20 14:19:50.476327] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.949 14:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:12.949 00:09:12.949 real 0m4.750s 00:09:12.949 user 0m6.963s 00:09:12.949 sys 0m0.715s 00:09:12.949 14:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.949 14:19:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.949 ************************************ 00:09:12.949 END TEST raid_superblock_test 00:09:12.949 ************************************ 00:09:12.949 14:19:51 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:09:12.949 14:19:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:12.949 14:19:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.949 14:19:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.949 ************************************ 00:09:12.949 START TEST raid_read_error_test 00:09:12.949 ************************************ 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.949 
14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gpPCCfT6cS 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62391 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62391 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62391 ']' 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:09:12.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.949 14:19:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.949 [2024-11-20 14:19:51.677502] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:09:12.949 [2024-11-20 14:19:51.677670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62391 ] 00:09:12.949 [2024-11-20 14:19:51.850194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.208 [2024-11-20 14:19:51.981040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.467 [2024-11-20 14:19:52.200628] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.467 [2024-11-20 14:19:52.200699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.726 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.726 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:13.726 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.726 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:13.726 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.726 14:19:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:13.726 BaseBdev1_malloc 00:09:13.726 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.726 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:13.726 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.726 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.726 true 00:09:13.726 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.726 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:13.726 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.726 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.726 [2024-11-20 14:19:52.673801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:13.726 [2024-11-20 14:19:52.673882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.726 [2024-11-20 14:19:52.673912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:13.726 [2024-11-20 14:19:52.673929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.726 [2024-11-20 14:19:52.676753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.726 [2024-11-20 14:19:52.676821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:13.726 BaseBdev1 00:09:13.727 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.727 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.727 14:19:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:13.727 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.727 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.985 BaseBdev2_malloc 00:09:13.985 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.985 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:13.985 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.985 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.985 true 00:09:13.985 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.985 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:13.985 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.985 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.985 [2024-11-20 14:19:52.728917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:13.985 [2024-11-20 14:19:52.729033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.985 [2024-11-20 14:19:52.729062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:13.985 [2024-11-20 14:19:52.729079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.986 [2024-11-20 14:19:52.731903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.986 [2024-11-20 14:19:52.731967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:09:13.986 BaseBdev2 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.986 [2024-11-20 14:19:52.736980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.986 [2024-11-20 14:19:52.739514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.986 [2024-11-20 14:19:52.739809] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:13.986 [2024-11-20 14:19:52.739833] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:13.986 [2024-11-20 14:19:52.740194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:13.986 [2024-11-20 14:19:52.740411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:13.986 [2024-11-20 14:19:52.740432] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:13.986 [2024-11-20 14:19:52.740628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.986 "name": "raid_bdev1", 00:09:13.986 "uuid": "ddef8ccd-91e8-4b42-a076-2d0d0fb1ef9b", 00:09:13.986 "strip_size_kb": 64, 00:09:13.986 "state": "online", 00:09:13.986 "raid_level": "concat", 00:09:13.986 "superblock": true, 00:09:13.986 "num_base_bdevs": 2, 00:09:13.986 "num_base_bdevs_discovered": 2, 00:09:13.986 "num_base_bdevs_operational": 2, 00:09:13.986 "base_bdevs_list": [ 00:09:13.986 { 00:09:13.986 "name": "BaseBdev1", 00:09:13.986 "uuid": "c007f0cf-8577-5d70-9c08-6a8575ced7c3", 00:09:13.986 "is_configured": true, 00:09:13.986 "data_offset": 2048, 00:09:13.986 "data_size": 63488 
00:09:13.986 }, 00:09:13.986 { 00:09:13.986 "name": "BaseBdev2", 00:09:13.986 "uuid": "988ef602-524f-5151-9099-ae9b294541c8", 00:09:13.986 "is_configured": true, 00:09:13.986 "data_offset": 2048, 00:09:13.986 "data_size": 63488 00:09:13.986 } 00:09:13.986 ] 00:09:13.986 }' 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.986 14:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.552 14:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:14.552 14:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:14.552 [2024-11-20 14:19:53.370540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.485 "name": "raid_bdev1", 00:09:15.485 "uuid": "ddef8ccd-91e8-4b42-a076-2d0d0fb1ef9b", 00:09:15.485 "strip_size_kb": 64, 00:09:15.485 "state": "online", 00:09:15.485 "raid_level": "concat", 00:09:15.485 "superblock": true, 00:09:15.485 "num_base_bdevs": 2, 00:09:15.485 "num_base_bdevs_discovered": 2, 00:09:15.485 "num_base_bdevs_operational": 2, 00:09:15.485 "base_bdevs_list": [ 00:09:15.485 { 00:09:15.485 "name": "BaseBdev1", 00:09:15.485 "uuid": "c007f0cf-8577-5d70-9c08-6a8575ced7c3", 00:09:15.485 "is_configured": true, 00:09:15.485 "data_offset": 2048, 00:09:15.485 "data_size": 63488 
00:09:15.485 }, 00:09:15.485 { 00:09:15.485 "name": "BaseBdev2", 00:09:15.485 "uuid": "988ef602-524f-5151-9099-ae9b294541c8", 00:09:15.485 "is_configured": true, 00:09:15.485 "data_offset": 2048, 00:09:15.485 "data_size": 63488 00:09:15.485 } 00:09:15.485 ] 00:09:15.485 }' 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.485 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.050 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:16.050 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.050 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.050 [2024-11-20 14:19:54.785270] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:16.050 [2024-11-20 14:19:54.785317] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.050 [2024-11-20 14:19:54.788621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.050 [2024-11-20 14:19:54.788687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.050 [2024-11-20 14:19:54.788737] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.050 [2024-11-20 14:19:54.788755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:16.050 { 00:09:16.050 "results": [ 00:09:16.050 { 00:09:16.050 "job": "raid_bdev1", 00:09:16.050 "core_mask": "0x1", 00:09:16.050 "workload": "randrw", 00:09:16.050 "percentage": 50, 00:09:16.050 "status": "finished", 00:09:16.050 "queue_depth": 1, 00:09:16.050 "io_size": 131072, 00:09:16.050 "runtime": 1.412335, 00:09:16.050 "iops": 11145.372733806073, 00:09:16.050 "mibps": 1393.171591725759, 00:09:16.050 
"io_failed": 1, 00:09:16.050 "io_timeout": 0, 00:09:16.050 "avg_latency_us": 124.5331811829385, 00:09:16.050 "min_latency_us": 39.33090909090909, 00:09:16.050 "max_latency_us": 1861.8181818181818 00:09:16.050 } 00:09:16.050 ], 00:09:16.050 "core_count": 1 00:09:16.050 } 00:09:16.050 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.050 14:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62391 00:09:16.050 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62391 ']' 00:09:16.050 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62391 00:09:16.050 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:16.050 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.050 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62391 00:09:16.050 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.050 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.050 killing process with pid 62391 00:09:16.050 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62391' 00:09:16.051 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62391 00:09:16.051 [2024-11-20 14:19:54.828075] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.051 14:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62391 00:09:16.051 [2024-11-20 14:19:54.950933] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.423 14:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:17.423 14:19:56 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gpPCCfT6cS 00:09:17.423 14:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:17.423 14:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:17.423 14:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:17.423 14:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:17.423 14:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:17.423 14:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:17.423 00:09:17.423 real 0m4.473s 00:09:17.423 user 0m5.566s 00:09:17.423 sys 0m0.561s 00:09:17.423 14:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.423 14:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.423 ************************************ 00:09:17.423 END TEST raid_read_error_test 00:09:17.423 ************************************ 00:09:17.423 14:19:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:09:17.423 14:19:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:17.423 14:19:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.423 14:19:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.423 ************************************ 00:09:17.423 START TEST raid_write_error_test 00:09:17.423 ************************************ 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:17.423 14:19:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:17.423 14:19:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4FOkk1a0Lo 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62531 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62531 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62531 ']' 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.423 14:19:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.423 [2024-11-20 14:19:56.205842] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:09:17.423 [2024-11-20 14:19:56.206099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62531 ] 00:09:17.423 [2024-11-20 14:19:56.387724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.682 [2024-11-20 14:19:56.515546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.940 [2024-11-20 14:19:56.765822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.940 [2024-11-20 14:19:56.765908] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.511 BaseBdev1_malloc 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.511 true 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.511 [2024-11-20 14:19:57.299097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:18.511 [2024-11-20 14:19:57.299164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.511 [2024-11-20 14:19:57.299193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:18.511 [2024-11-20 14:19:57.299211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.511 [2024-11-20 14:19:57.301999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.511 [2024-11-20 14:19:57.302065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:18.511 BaseBdev1 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.511 BaseBdev2_malloc 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:18.511 14:19:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.511 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.512 true 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.512 [2024-11-20 14:19:57.363271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:18.512 [2024-11-20 14:19:57.363341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.512 [2024-11-20 14:19:57.363367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:18.512 [2024-11-20 14:19:57.363385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.512 [2024-11-20 14:19:57.366199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.512 [2024-11-20 14:19:57.366247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:18.512 BaseBdev2 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.512 [2024-11-20 14:19:57.371358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:18.512 [2024-11-20 14:19:57.373842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.512 [2024-11-20 14:19:57.374118] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:18.512 [2024-11-20 14:19:57.374154] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:18.512 [2024-11-20 14:19:57.374497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:18.512 [2024-11-20 14:19:57.374756] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:18.512 [2024-11-20 14:19:57.374792] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:18.512 [2024-11-20 14:19:57.375070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.512 14:19:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.512 "name": "raid_bdev1", 00:09:18.512 "uuid": "ccaa1574-2223-4592-8fa9-dc864fc9193f", 00:09:18.512 "strip_size_kb": 64, 00:09:18.512 "state": "online", 00:09:18.512 "raid_level": "concat", 00:09:18.512 "superblock": true, 00:09:18.512 "num_base_bdevs": 2, 00:09:18.512 "num_base_bdevs_discovered": 2, 00:09:18.512 "num_base_bdevs_operational": 2, 00:09:18.512 "base_bdevs_list": [ 00:09:18.512 { 00:09:18.512 "name": "BaseBdev1", 00:09:18.512 "uuid": "fc2829f2-6fef-5d2f-a1d6-9c6be1b3415d", 00:09:18.512 "is_configured": true, 00:09:18.512 "data_offset": 2048, 00:09:18.512 "data_size": 63488 00:09:18.512 }, 00:09:18.512 { 00:09:18.512 "name": "BaseBdev2", 00:09:18.512 "uuid": "65e8ca8e-3d5e-5e71-83d8-11196c8ddd99", 00:09:18.512 "is_configured": true, 00:09:18.512 "data_offset": 2048, 00:09:18.512 "data_size": 63488 00:09:18.512 } 00:09:18.512 ] 00:09:18.512 }' 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.512 14:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.079 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:09:19.079 14:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:19.079 [2024-11-20 14:19:57.948840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.013 "name": "raid_bdev1", 00:09:20.013 "uuid": "ccaa1574-2223-4592-8fa9-dc864fc9193f", 00:09:20.013 "strip_size_kb": 64, 00:09:20.013 "state": "online", 00:09:20.013 "raid_level": "concat", 00:09:20.013 "superblock": true, 00:09:20.013 "num_base_bdevs": 2, 00:09:20.013 "num_base_bdevs_discovered": 2, 00:09:20.013 "num_base_bdevs_operational": 2, 00:09:20.013 "base_bdevs_list": [ 00:09:20.013 { 00:09:20.013 "name": "BaseBdev1", 00:09:20.013 "uuid": "fc2829f2-6fef-5d2f-a1d6-9c6be1b3415d", 00:09:20.013 "is_configured": true, 00:09:20.013 "data_offset": 2048, 00:09:20.013 "data_size": 63488 00:09:20.013 }, 00:09:20.013 { 00:09:20.013 "name": "BaseBdev2", 00:09:20.013 "uuid": "65e8ca8e-3d5e-5e71-83d8-11196c8ddd99", 00:09:20.013 "is_configured": true, 00:09:20.013 "data_offset": 2048, 00:09:20.013 "data_size": 63488 00:09:20.013 } 00:09:20.013 ] 00:09:20.013 }' 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.013 14:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.579 14:19:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:20.579 14:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.579 14:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.579 [2024-11-20 14:19:59.331929] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.579 [2024-11-20 14:19:59.331977] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.579 [2024-11-20 14:19:59.335348] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.579 [2024-11-20 14:19:59.335427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.579 [2024-11-20 14:19:59.335471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.579 [2024-11-20 14:19:59.335489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:20.579 { 00:09:20.579 "results": [ 00:09:20.579 { 00:09:20.579 "job": "raid_bdev1", 00:09:20.579 "core_mask": "0x1", 00:09:20.579 "workload": "randrw", 00:09:20.579 "percentage": 50, 00:09:20.579 "status": "finished", 00:09:20.579 "queue_depth": 1, 00:09:20.579 "io_size": 131072, 00:09:20.579 "runtime": 1.380622, 00:09:20.579 "iops": 11116.004235771992, 00:09:20.579 "mibps": 1389.500529471499, 00:09:20.579 "io_failed": 1, 00:09:20.579 "io_timeout": 0, 00:09:20.579 "avg_latency_us": 125.01044471296231, 00:09:20.579 "min_latency_us": 39.09818181818182, 00:09:20.579 "max_latency_us": 1861.8181818181818 00:09:20.579 } 00:09:20.579 ], 00:09:20.579 "core_count": 1 00:09:20.579 } 00:09:20.579 14:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.579 14:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62531 00:09:20.579 14:19:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62531 ']' 00:09:20.579 14:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62531 00:09:20.579 14:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:20.579 14:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.579 14:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62531 00:09:20.579 14:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.579 14:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.579 killing process with pid 62531 00:09:20.579 14:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62531' 00:09:20.579 14:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62531 00:09:20.579 [2024-11-20 14:19:59.369604] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.579 14:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62531 00:09:20.579 [2024-11-20 14:19:59.491191] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.987 14:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4FOkk1a0Lo 00:09:21.987 14:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:21.987 14:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:21.987 14:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:21.987 14:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:21.987 14:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:21.987 14:20:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:21.987 14:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:21.987 00:09:21.987 real 0m4.515s 00:09:21.987 user 0m5.614s 00:09:21.987 sys 0m0.580s 00:09:21.987 14:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.987 14:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.987 ************************************ 00:09:21.987 END TEST raid_write_error_test 00:09:21.987 ************************************ 00:09:21.987 14:20:00 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:21.987 14:20:00 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:09:21.987 14:20:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:21.987 14:20:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.987 14:20:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.987 ************************************ 00:09:21.987 START TEST raid_state_function_test 00:09:21.987 ************************************ 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62680 00:09:21.987 Process raid pid: 62680 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 62680' 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62680 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62680 ']' 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.987 14:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.987 [2024-11-20 14:20:00.798105] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:09:21.987 [2024-11-20 14:20:00.798283] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.246 [2024-11-20 14:20:00.986116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.246 [2024-11-20 14:20:01.117091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.504 [2024-11-20 14:20:01.324656] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.504 [2024-11-20 14:20:01.324710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.071 [2024-11-20 14:20:01.863261] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.071 [2024-11-20 14:20:01.863340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.071 [2024-11-20 14:20:01.863359] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.071 [2024-11-20 14:20:01.863376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.071 14:20:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.071 "name": "Existed_Raid", 00:09:23.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.071 "strip_size_kb": 0, 00:09:23.071 "state": "configuring", 00:09:23.071 
"raid_level": "raid1", 00:09:23.071 "superblock": false, 00:09:23.071 "num_base_bdevs": 2, 00:09:23.071 "num_base_bdevs_discovered": 0, 00:09:23.071 "num_base_bdevs_operational": 2, 00:09:23.071 "base_bdevs_list": [ 00:09:23.071 { 00:09:23.071 "name": "BaseBdev1", 00:09:23.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.071 "is_configured": false, 00:09:23.071 "data_offset": 0, 00:09:23.071 "data_size": 0 00:09:23.071 }, 00:09:23.071 { 00:09:23.071 "name": "BaseBdev2", 00:09:23.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.071 "is_configured": false, 00:09:23.071 "data_offset": 0, 00:09:23.071 "data_size": 0 00:09:23.071 } 00:09:23.071 ] 00:09:23.071 }' 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.071 14:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.639 [2024-11-20 14:20:02.407415] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:23.639 [2024-11-20 14:20:02.407466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:23.639 [2024-11-20 14:20:02.415371] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.639 [2024-11-20 14:20:02.415437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.639 [2024-11-20 14:20:02.415457] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.639 [2024-11-20 14:20:02.415481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.639 [2024-11-20 14:20:02.467621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.639 BaseBdev1 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.639 [ 00:09:23.639 { 00:09:23.639 "name": "BaseBdev1", 00:09:23.639 "aliases": [ 00:09:23.639 "b4290409-6a03-4790-8177-48cb553b59eb" 00:09:23.639 ], 00:09:23.639 "product_name": "Malloc disk", 00:09:23.639 "block_size": 512, 00:09:23.639 "num_blocks": 65536, 00:09:23.639 "uuid": "b4290409-6a03-4790-8177-48cb553b59eb", 00:09:23.639 "assigned_rate_limits": { 00:09:23.639 "rw_ios_per_sec": 0, 00:09:23.639 "rw_mbytes_per_sec": 0, 00:09:23.639 "r_mbytes_per_sec": 0, 00:09:23.639 "w_mbytes_per_sec": 0 00:09:23.639 }, 00:09:23.639 "claimed": true, 00:09:23.639 "claim_type": "exclusive_write", 00:09:23.639 "zoned": false, 00:09:23.639 "supported_io_types": { 00:09:23.639 "read": true, 00:09:23.639 "write": true, 00:09:23.639 "unmap": true, 00:09:23.639 "flush": true, 00:09:23.639 "reset": true, 00:09:23.639 "nvme_admin": false, 00:09:23.639 "nvme_io": false, 00:09:23.639 "nvme_io_md": false, 00:09:23.639 "write_zeroes": true, 00:09:23.639 "zcopy": true, 00:09:23.639 "get_zone_info": false, 00:09:23.639 "zone_management": false, 00:09:23.639 "zone_append": false, 00:09:23.639 "compare": false, 00:09:23.639 "compare_and_write": false, 00:09:23.639 "abort": true, 00:09:23.639 "seek_hole": false, 00:09:23.639 "seek_data": false, 00:09:23.639 "copy": true, 00:09:23.639 "nvme_iov_md": 
false 00:09:23.639 }, 00:09:23.639 "memory_domains": [ 00:09:23.639 { 00:09:23.639 "dma_device_id": "system", 00:09:23.639 "dma_device_type": 1 00:09:23.639 }, 00:09:23.639 { 00:09:23.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.639 "dma_device_type": 2 00:09:23.639 } 00:09:23.639 ], 00:09:23.639 "driver_specific": {} 00:09:23.639 } 00:09:23.639 ] 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.639 
14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.639 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.639 "name": "Existed_Raid", 00:09:23.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.640 "strip_size_kb": 0, 00:09:23.640 "state": "configuring", 00:09:23.640 "raid_level": "raid1", 00:09:23.640 "superblock": false, 00:09:23.640 "num_base_bdevs": 2, 00:09:23.640 "num_base_bdevs_discovered": 1, 00:09:23.640 "num_base_bdevs_operational": 2, 00:09:23.640 "base_bdevs_list": [ 00:09:23.640 { 00:09:23.640 "name": "BaseBdev1", 00:09:23.640 "uuid": "b4290409-6a03-4790-8177-48cb553b59eb", 00:09:23.640 "is_configured": true, 00:09:23.640 "data_offset": 0, 00:09:23.640 "data_size": 65536 00:09:23.640 }, 00:09:23.640 { 00:09:23.640 "name": "BaseBdev2", 00:09:23.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.640 "is_configured": false, 00:09:23.640 "data_offset": 0, 00:09:23.640 "data_size": 0 00:09:23.640 } 00:09:23.640 ] 00:09:23.640 }' 00:09:23.640 14:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.640 14:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.207 [2024-11-20 14:20:03.007804] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:24.207 [2024-11-20 14:20:03.008015] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.207 [2024-11-20 14:20:03.015826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.207 [2024-11-20 14:20:03.018235] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:24.207 [2024-11-20 14:20:03.018416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.207 "name": "Existed_Raid", 00:09:24.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.207 "strip_size_kb": 0, 00:09:24.207 "state": "configuring", 00:09:24.207 "raid_level": "raid1", 00:09:24.207 "superblock": false, 00:09:24.207 "num_base_bdevs": 2, 00:09:24.207 "num_base_bdevs_discovered": 1, 00:09:24.207 "num_base_bdevs_operational": 2, 00:09:24.207 "base_bdevs_list": [ 00:09:24.207 { 00:09:24.207 "name": "BaseBdev1", 00:09:24.207 "uuid": "b4290409-6a03-4790-8177-48cb553b59eb", 00:09:24.207 "is_configured": true, 00:09:24.207 "data_offset": 0, 00:09:24.207 "data_size": 65536 00:09:24.207 }, 00:09:24.207 { 00:09:24.207 "name": "BaseBdev2", 00:09:24.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.207 "is_configured": false, 00:09:24.207 "data_offset": 0, 00:09:24.207 "data_size": 0 00:09:24.207 } 00:09:24.207 ] 
00:09:24.207 }' 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.207 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.775 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:24.775 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.775 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.775 [2024-11-20 14:20:03.574162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.775 [2024-11-20 14:20:03.574233] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:24.775 [2024-11-20 14:20:03.574247] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:24.775 [2024-11-20 14:20:03.574568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:24.775 [2024-11-20 14:20:03.574797] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:24.775 [2024-11-20 14:20:03.574818] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:24.775 [2024-11-20 14:20:03.575178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.775 BaseBdev2 00:09:24.775 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.775 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:24.775 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:24.775 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.775 14:20:03 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.776 [ 00:09:24.776 { 00:09:24.776 "name": "BaseBdev2", 00:09:24.776 "aliases": [ 00:09:24.776 "6131ab08-bc55-44a7-89f3-eb15314cc73f" 00:09:24.776 ], 00:09:24.776 "product_name": "Malloc disk", 00:09:24.776 "block_size": 512, 00:09:24.776 "num_blocks": 65536, 00:09:24.776 "uuid": "6131ab08-bc55-44a7-89f3-eb15314cc73f", 00:09:24.776 "assigned_rate_limits": { 00:09:24.776 "rw_ios_per_sec": 0, 00:09:24.776 "rw_mbytes_per_sec": 0, 00:09:24.776 "r_mbytes_per_sec": 0, 00:09:24.776 "w_mbytes_per_sec": 0 00:09:24.776 }, 00:09:24.776 "claimed": true, 00:09:24.776 "claim_type": "exclusive_write", 00:09:24.776 "zoned": false, 00:09:24.776 "supported_io_types": { 00:09:24.776 "read": true, 00:09:24.776 "write": true, 00:09:24.776 "unmap": true, 00:09:24.776 "flush": true, 00:09:24.776 "reset": true, 00:09:24.776 "nvme_admin": false, 00:09:24.776 "nvme_io": false, 00:09:24.776 "nvme_io_md": false, 00:09:24.776 "write_zeroes": 
true, 00:09:24.776 "zcopy": true, 00:09:24.776 "get_zone_info": false, 00:09:24.776 "zone_management": false, 00:09:24.776 "zone_append": false, 00:09:24.776 "compare": false, 00:09:24.776 "compare_and_write": false, 00:09:24.776 "abort": true, 00:09:24.776 "seek_hole": false, 00:09:24.776 "seek_data": false, 00:09:24.776 "copy": true, 00:09:24.776 "nvme_iov_md": false 00:09:24.776 }, 00:09:24.776 "memory_domains": [ 00:09:24.776 { 00:09:24.776 "dma_device_id": "system", 00:09:24.776 "dma_device_type": 1 00:09:24.776 }, 00:09:24.776 { 00:09:24.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.776 "dma_device_type": 2 00:09:24.776 } 00:09:24.776 ], 00:09:24.776 "driver_specific": {} 00:09:24.776 } 00:09:24.776 ] 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.776 14:20:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.776 "name": "Existed_Raid", 00:09:24.776 "uuid": "dc705565-5c9b-4a0a-bced-ee82b90a84dc", 00:09:24.776 "strip_size_kb": 0, 00:09:24.776 "state": "online", 00:09:24.776 "raid_level": "raid1", 00:09:24.776 "superblock": false, 00:09:24.776 "num_base_bdevs": 2, 00:09:24.776 "num_base_bdevs_discovered": 2, 00:09:24.776 "num_base_bdevs_operational": 2, 00:09:24.776 "base_bdevs_list": [ 00:09:24.776 { 00:09:24.776 "name": "BaseBdev1", 00:09:24.776 "uuid": "b4290409-6a03-4790-8177-48cb553b59eb", 00:09:24.776 "is_configured": true, 00:09:24.776 "data_offset": 0, 00:09:24.776 "data_size": 65536 00:09:24.776 }, 00:09:24.776 { 00:09:24.776 "name": "BaseBdev2", 00:09:24.776 "uuid": "6131ab08-bc55-44a7-89f3-eb15314cc73f", 00:09:24.776 "is_configured": true, 00:09:24.776 "data_offset": 0, 00:09:24.776 "data_size": 65536 00:09:24.776 } 00:09:24.776 ] 00:09:24.776 }' 00:09:24.776 14:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.776 14:20:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.343 [2024-11-20 14:20:04.150704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.343 "name": "Existed_Raid", 00:09:25.343 "aliases": [ 00:09:25.343 "dc705565-5c9b-4a0a-bced-ee82b90a84dc" 00:09:25.343 ], 00:09:25.343 "product_name": "Raid Volume", 00:09:25.343 "block_size": 512, 00:09:25.343 "num_blocks": 65536, 00:09:25.343 "uuid": "dc705565-5c9b-4a0a-bced-ee82b90a84dc", 00:09:25.343 "assigned_rate_limits": { 00:09:25.343 "rw_ios_per_sec": 0, 00:09:25.343 "rw_mbytes_per_sec": 0, 00:09:25.343 "r_mbytes_per_sec": 0, 00:09:25.343 
"w_mbytes_per_sec": 0 00:09:25.343 }, 00:09:25.343 "claimed": false, 00:09:25.343 "zoned": false, 00:09:25.343 "supported_io_types": { 00:09:25.343 "read": true, 00:09:25.343 "write": true, 00:09:25.343 "unmap": false, 00:09:25.343 "flush": false, 00:09:25.343 "reset": true, 00:09:25.343 "nvme_admin": false, 00:09:25.343 "nvme_io": false, 00:09:25.343 "nvme_io_md": false, 00:09:25.343 "write_zeroes": true, 00:09:25.343 "zcopy": false, 00:09:25.343 "get_zone_info": false, 00:09:25.343 "zone_management": false, 00:09:25.343 "zone_append": false, 00:09:25.343 "compare": false, 00:09:25.343 "compare_and_write": false, 00:09:25.343 "abort": false, 00:09:25.343 "seek_hole": false, 00:09:25.343 "seek_data": false, 00:09:25.343 "copy": false, 00:09:25.343 "nvme_iov_md": false 00:09:25.343 }, 00:09:25.343 "memory_domains": [ 00:09:25.343 { 00:09:25.343 "dma_device_id": "system", 00:09:25.343 "dma_device_type": 1 00:09:25.343 }, 00:09:25.343 { 00:09:25.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.343 "dma_device_type": 2 00:09:25.343 }, 00:09:25.343 { 00:09:25.343 "dma_device_id": "system", 00:09:25.343 "dma_device_type": 1 00:09:25.343 }, 00:09:25.343 { 00:09:25.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.343 "dma_device_type": 2 00:09:25.343 } 00:09:25.343 ], 00:09:25.343 "driver_specific": { 00:09:25.343 "raid": { 00:09:25.343 "uuid": "dc705565-5c9b-4a0a-bced-ee82b90a84dc", 00:09:25.343 "strip_size_kb": 0, 00:09:25.343 "state": "online", 00:09:25.343 "raid_level": "raid1", 00:09:25.343 "superblock": false, 00:09:25.343 "num_base_bdevs": 2, 00:09:25.343 "num_base_bdevs_discovered": 2, 00:09:25.343 "num_base_bdevs_operational": 2, 00:09:25.343 "base_bdevs_list": [ 00:09:25.343 { 00:09:25.343 "name": "BaseBdev1", 00:09:25.343 "uuid": "b4290409-6a03-4790-8177-48cb553b59eb", 00:09:25.343 "is_configured": true, 00:09:25.343 "data_offset": 0, 00:09:25.343 "data_size": 65536 00:09:25.343 }, 00:09:25.343 { 00:09:25.343 "name": "BaseBdev2", 00:09:25.343 "uuid": 
"6131ab08-bc55-44a7-89f3-eb15314cc73f", 00:09:25.343 "is_configured": true, 00:09:25.343 "data_offset": 0, 00:09:25.343 "data_size": 65536 00:09:25.343 } 00:09:25.343 ] 00:09:25.343 } 00:09:25.343 } 00:09:25.343 }' 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:25.343 BaseBdev2' 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.343 14:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:25.602 14:20:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.602 [2024-11-20 14:20:04.382590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.602 "name": "Existed_Raid", 00:09:25.602 "uuid": "dc705565-5c9b-4a0a-bced-ee82b90a84dc", 00:09:25.602 "strip_size_kb": 0, 00:09:25.602 "state": "online", 00:09:25.602 "raid_level": "raid1", 00:09:25.602 "superblock": false, 00:09:25.602 "num_base_bdevs": 2, 00:09:25.602 "num_base_bdevs_discovered": 1, 00:09:25.602 "num_base_bdevs_operational": 1, 00:09:25.602 "base_bdevs_list": [ 00:09:25.602 { 
00:09:25.602 "name": null, 00:09:25.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.602 "is_configured": false, 00:09:25.602 "data_offset": 0, 00:09:25.602 "data_size": 65536 00:09:25.602 }, 00:09:25.602 { 00:09:25.602 "name": "BaseBdev2", 00:09:25.602 "uuid": "6131ab08-bc55-44a7-89f3-eb15314cc73f", 00:09:25.602 "is_configured": true, 00:09:25.602 "data_offset": 0, 00:09:25.602 "data_size": 65536 00:09:25.602 } 00:09:25.602 ] 00:09:25.602 }' 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.602 14:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.169 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:26.169 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:26.169 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.169 14:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:26.169 14:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.169 14:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.169 14:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.169 14:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:26.169 14:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:26.169 14:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:26.169 14:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.169 14:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:26.169 [2024-11-20 14:20:05.028415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:26.169 [2024-11-20 14:20:05.028535] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.169 [2024-11-20 14:20:05.113669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.169 [2024-11-20 14:20:05.113942] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.169 [2024-11-20 14:20:05.114139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:26.169 14:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.169 14:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:26.169 14:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:26.169 14:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.169 14:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.169 14:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.169 14:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:26.169 14:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.451 14:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:26.451 14:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:26.451 14:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:26.451 14:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62680 00:09:26.451 14:20:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62680 ']' 00:09:26.451 14:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62680 00:09:26.451 14:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:26.451 14:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.451 14:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62680 00:09:26.451 killing process with pid 62680 00:09:26.451 14:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.451 14:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.451 14:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62680' 00:09:26.451 14:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62680 00:09:26.451 [2024-11-20 14:20:05.202904] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.451 14:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62680 00:09:26.451 [2024-11-20 14:20:05.217555] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:27.389 00:09:27.389 real 0m5.596s 00:09:27.389 user 0m8.472s 00:09:27.389 sys 0m0.763s 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.389 ************************************ 00:09:27.389 END TEST raid_state_function_test 00:09:27.389 ************************************ 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.389 14:20:06 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:09:27.389 14:20:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:27.389 14:20:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.389 14:20:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:27.389 ************************************ 00:09:27.389 START TEST raid_state_function_test_sb 00:09:27.389 ************************************ 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:27.389 Process raid pid: 62933 00:09:27.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62933 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62933' 00:09:27.389 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:27.390 14:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62933 00:09:27.390 14:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62933 ']' 00:09:27.390 14:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.390 14:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.390 14:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.390 14:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.390 14:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.648 [2024-11-20 14:20:06.420791] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:09:27.648 [2024-11-20 14:20:06.421223] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.648 [2024-11-20 14:20:06.612099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.906 [2024-11-20 14:20:06.768770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.164 [2024-11-20 14:20:06.984524] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.164 [2024-11-20 14:20:06.984784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.422 [2024-11-20 14:20:07.361365] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.422 [2024-11-20 14:20:07.361432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.422 [2024-11-20 14:20:07.361450] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.422 [2024-11-20 14:20:07.361467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.422 
14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.422 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.680 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.680 "name": "Existed_Raid", 00:09:28.680 "uuid": "777662c2-c98c-4f62-91e9-3ce0fce36ff5", 00:09:28.680 "strip_size_kb": 0, 
00:09:28.680 "state": "configuring", 00:09:28.680 "raid_level": "raid1", 00:09:28.680 "superblock": true, 00:09:28.680 "num_base_bdevs": 2, 00:09:28.680 "num_base_bdevs_discovered": 0, 00:09:28.680 "num_base_bdevs_operational": 2, 00:09:28.680 "base_bdevs_list": [ 00:09:28.680 { 00:09:28.680 "name": "BaseBdev1", 00:09:28.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.680 "is_configured": false, 00:09:28.680 "data_offset": 0, 00:09:28.680 "data_size": 0 00:09:28.680 }, 00:09:28.680 { 00:09:28.680 "name": "BaseBdev2", 00:09:28.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.680 "is_configured": false, 00:09:28.680 "data_offset": 0, 00:09:28.680 "data_size": 0 00:09:28.680 } 00:09:28.680 ] 00:09:28.680 }' 00:09:28.680 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.680 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.938 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:28.938 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.938 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.938 [2024-11-20 14:20:07.881432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:28.938 [2024-11-20 14:20:07.881477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:28.938 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.938 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:28.938 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.938 14:20:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.939 [2024-11-20 14:20:07.889407] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.939 [2024-11-20 14:20:07.889459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.939 [2024-11-20 14:20:07.889475] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.939 [2024-11-20 14:20:07.889494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.939 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.939 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:28.939 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.939 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.197 [2024-11-20 14:20:07.934217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.197 BaseBdev1 00:09:29.197 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.198 [ 00:09:29.198 { 00:09:29.198 "name": "BaseBdev1", 00:09:29.198 "aliases": [ 00:09:29.198 "b963222b-fb2a-43f3-bae0-0e56347c7491" 00:09:29.198 ], 00:09:29.198 "product_name": "Malloc disk", 00:09:29.198 "block_size": 512, 00:09:29.198 "num_blocks": 65536, 00:09:29.198 "uuid": "b963222b-fb2a-43f3-bae0-0e56347c7491", 00:09:29.198 "assigned_rate_limits": { 00:09:29.198 "rw_ios_per_sec": 0, 00:09:29.198 "rw_mbytes_per_sec": 0, 00:09:29.198 "r_mbytes_per_sec": 0, 00:09:29.198 "w_mbytes_per_sec": 0 00:09:29.198 }, 00:09:29.198 "claimed": true, 00:09:29.198 "claim_type": "exclusive_write", 00:09:29.198 "zoned": false, 00:09:29.198 "supported_io_types": { 00:09:29.198 "read": true, 00:09:29.198 "write": true, 00:09:29.198 "unmap": true, 00:09:29.198 "flush": true, 00:09:29.198 "reset": true, 00:09:29.198 "nvme_admin": false, 00:09:29.198 "nvme_io": false, 00:09:29.198 "nvme_io_md": false, 00:09:29.198 "write_zeroes": true, 00:09:29.198 "zcopy": true, 00:09:29.198 "get_zone_info": false, 00:09:29.198 "zone_management": false, 00:09:29.198 "zone_append": false, 00:09:29.198 "compare": false, 00:09:29.198 "compare_and_write": false, 00:09:29.198 
"abort": true, 00:09:29.198 "seek_hole": false, 00:09:29.198 "seek_data": false, 00:09:29.198 "copy": true, 00:09:29.198 "nvme_iov_md": false 00:09:29.198 }, 00:09:29.198 "memory_domains": [ 00:09:29.198 { 00:09:29.198 "dma_device_id": "system", 00:09:29.198 "dma_device_type": 1 00:09:29.198 }, 00:09:29.198 { 00:09:29.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.198 "dma_device_type": 2 00:09:29.198 } 00:09:29.198 ], 00:09:29.198 "driver_specific": {} 00:09:29.198 } 00:09:29.198 ] 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.198 14:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.198 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.198 "name": "Existed_Raid", 00:09:29.198 "uuid": "50ce22d6-146f-4568-ae64-fbdfecf17cd1", 00:09:29.198 "strip_size_kb": 0, 00:09:29.198 "state": "configuring", 00:09:29.198 "raid_level": "raid1", 00:09:29.198 "superblock": true, 00:09:29.198 "num_base_bdevs": 2, 00:09:29.198 "num_base_bdevs_discovered": 1, 00:09:29.198 "num_base_bdevs_operational": 2, 00:09:29.198 "base_bdevs_list": [ 00:09:29.198 { 00:09:29.198 "name": "BaseBdev1", 00:09:29.198 "uuid": "b963222b-fb2a-43f3-bae0-0e56347c7491", 00:09:29.198 "is_configured": true, 00:09:29.198 "data_offset": 2048, 00:09:29.198 "data_size": 63488 00:09:29.198 }, 00:09:29.198 { 00:09:29.198 "name": "BaseBdev2", 00:09:29.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.198 "is_configured": false, 00:09:29.198 "data_offset": 0, 00:09:29.198 "data_size": 0 00:09:29.198 } 00:09:29.198 ] 00:09:29.198 }' 00:09:29.198 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.198 14:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:29.766 [2024-11-20 14:20:08.470407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.766 [2024-11-20 14:20:08.470466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.766 [2024-11-20 14:20:08.478438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.766 [2024-11-20 14:20:08.480821] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.766 [2024-11-20 14:20:08.481024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.766 "name": "Existed_Raid", 00:09:29.766 "uuid": "6421c214-dd80-48f5-9bf7-bb0c8bdcabf2", 00:09:29.766 "strip_size_kb": 0, 00:09:29.766 "state": "configuring", 00:09:29.766 "raid_level": "raid1", 00:09:29.766 "superblock": true, 00:09:29.766 "num_base_bdevs": 2, 00:09:29.766 "num_base_bdevs_discovered": 1, 00:09:29.766 "num_base_bdevs_operational": 2, 00:09:29.766 "base_bdevs_list": [ 00:09:29.766 { 00:09:29.766 "name": "BaseBdev1", 00:09:29.766 "uuid": "b963222b-fb2a-43f3-bae0-0e56347c7491", 00:09:29.766 "is_configured": true, 00:09:29.766 "data_offset": 2048, 
00:09:29.766 "data_size": 63488 00:09:29.766 }, 00:09:29.766 { 00:09:29.766 "name": "BaseBdev2", 00:09:29.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.766 "is_configured": false, 00:09:29.766 "data_offset": 0, 00:09:29.766 "data_size": 0 00:09:29.766 } 00:09:29.766 ] 00:09:29.766 }' 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.766 14:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.333 [2024-11-20 14:20:09.068665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.333 [2024-11-20 14:20:09.068979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:30.333 [2024-11-20 14:20:09.069028] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:30.333 [2024-11-20 14:20:09.069381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:30.333 [2024-11-20 14:20:09.069588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:30.333 [2024-11-20 14:20:09.069611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:30.333 [2024-11-20 14:20:09.069781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.333 BaseBdev2 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.333 [ 00:09:30.333 { 00:09:30.333 "name": "BaseBdev2", 00:09:30.333 "aliases": [ 00:09:30.333 "a0f625ab-4c8b-4603-b35d-79038ed71300" 00:09:30.333 ], 00:09:30.333 "product_name": "Malloc disk", 00:09:30.333 "block_size": 512, 00:09:30.333 "num_blocks": 65536, 00:09:30.333 "uuid": "a0f625ab-4c8b-4603-b35d-79038ed71300", 00:09:30.333 "assigned_rate_limits": { 00:09:30.333 "rw_ios_per_sec": 0, 00:09:30.333 "rw_mbytes_per_sec": 0, 00:09:30.333 "r_mbytes_per_sec": 0, 00:09:30.333 "w_mbytes_per_sec": 0 00:09:30.333 }, 00:09:30.333 "claimed": true, 00:09:30.333 "claim_type": 
"exclusive_write", 00:09:30.333 "zoned": false, 00:09:30.333 "supported_io_types": { 00:09:30.333 "read": true, 00:09:30.333 "write": true, 00:09:30.333 "unmap": true, 00:09:30.333 "flush": true, 00:09:30.333 "reset": true, 00:09:30.333 "nvme_admin": false, 00:09:30.333 "nvme_io": false, 00:09:30.333 "nvme_io_md": false, 00:09:30.333 "write_zeroes": true, 00:09:30.333 "zcopy": true, 00:09:30.333 "get_zone_info": false, 00:09:30.333 "zone_management": false, 00:09:30.333 "zone_append": false, 00:09:30.333 "compare": false, 00:09:30.333 "compare_and_write": false, 00:09:30.333 "abort": true, 00:09:30.333 "seek_hole": false, 00:09:30.333 "seek_data": false, 00:09:30.333 "copy": true, 00:09:30.333 "nvme_iov_md": false 00:09:30.333 }, 00:09:30.333 "memory_domains": [ 00:09:30.333 { 00:09:30.333 "dma_device_id": "system", 00:09:30.333 "dma_device_type": 1 00:09:30.333 }, 00:09:30.333 { 00:09:30.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.333 "dma_device_type": 2 00:09:30.333 } 00:09:30.333 ], 00:09:30.333 "driver_specific": {} 00:09:30.333 } 00:09:30.333 ] 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.333 "name": "Existed_Raid", 00:09:30.333 "uuid": "6421c214-dd80-48f5-9bf7-bb0c8bdcabf2", 00:09:30.333 "strip_size_kb": 0, 00:09:30.333 "state": "online", 00:09:30.333 "raid_level": "raid1", 00:09:30.333 "superblock": true, 00:09:30.333 "num_base_bdevs": 2, 00:09:30.333 "num_base_bdevs_discovered": 2, 00:09:30.333 "num_base_bdevs_operational": 2, 00:09:30.333 "base_bdevs_list": [ 00:09:30.333 { 00:09:30.333 "name": "BaseBdev1", 00:09:30.333 "uuid": "b963222b-fb2a-43f3-bae0-0e56347c7491", 00:09:30.333 "is_configured": true, 00:09:30.333 "data_offset": 2048, 00:09:30.333 "data_size": 63488 
00:09:30.333 }, 00:09:30.333 { 00:09:30.333 "name": "BaseBdev2", 00:09:30.333 "uuid": "a0f625ab-4c8b-4603-b35d-79038ed71300", 00:09:30.333 "is_configured": true, 00:09:30.333 "data_offset": 2048, 00:09:30.333 "data_size": 63488 00:09:30.333 } 00:09:30.333 ] 00:09:30.333 }' 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.333 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.898 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:30.898 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:30.898 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:30.898 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:30.898 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:30.898 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:30.898 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:30.898 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.898 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:30.898 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.898 [2024-11-20 14:20:09.621218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.898 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.898 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:30.898 "name": 
"Existed_Raid", 00:09:30.898 "aliases": [ 00:09:30.898 "6421c214-dd80-48f5-9bf7-bb0c8bdcabf2" 00:09:30.898 ], 00:09:30.898 "product_name": "Raid Volume", 00:09:30.898 "block_size": 512, 00:09:30.898 "num_blocks": 63488, 00:09:30.898 "uuid": "6421c214-dd80-48f5-9bf7-bb0c8bdcabf2", 00:09:30.898 "assigned_rate_limits": { 00:09:30.898 "rw_ios_per_sec": 0, 00:09:30.898 "rw_mbytes_per_sec": 0, 00:09:30.898 "r_mbytes_per_sec": 0, 00:09:30.898 "w_mbytes_per_sec": 0 00:09:30.898 }, 00:09:30.898 "claimed": false, 00:09:30.898 "zoned": false, 00:09:30.898 "supported_io_types": { 00:09:30.898 "read": true, 00:09:30.898 "write": true, 00:09:30.898 "unmap": false, 00:09:30.898 "flush": false, 00:09:30.898 "reset": true, 00:09:30.898 "nvme_admin": false, 00:09:30.898 "nvme_io": false, 00:09:30.898 "nvme_io_md": false, 00:09:30.898 "write_zeroes": true, 00:09:30.898 "zcopy": false, 00:09:30.898 "get_zone_info": false, 00:09:30.898 "zone_management": false, 00:09:30.898 "zone_append": false, 00:09:30.898 "compare": false, 00:09:30.898 "compare_and_write": false, 00:09:30.898 "abort": false, 00:09:30.898 "seek_hole": false, 00:09:30.898 "seek_data": false, 00:09:30.898 "copy": false, 00:09:30.898 "nvme_iov_md": false 00:09:30.898 }, 00:09:30.898 "memory_domains": [ 00:09:30.898 { 00:09:30.898 "dma_device_id": "system", 00:09:30.898 "dma_device_type": 1 00:09:30.898 }, 00:09:30.898 { 00:09:30.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.899 "dma_device_type": 2 00:09:30.899 }, 00:09:30.899 { 00:09:30.899 "dma_device_id": "system", 00:09:30.899 "dma_device_type": 1 00:09:30.899 }, 00:09:30.899 { 00:09:30.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.899 "dma_device_type": 2 00:09:30.899 } 00:09:30.899 ], 00:09:30.899 "driver_specific": { 00:09:30.899 "raid": { 00:09:30.899 "uuid": "6421c214-dd80-48f5-9bf7-bb0c8bdcabf2", 00:09:30.899 "strip_size_kb": 0, 00:09:30.899 "state": "online", 00:09:30.899 "raid_level": "raid1", 00:09:30.899 "superblock": true, 00:09:30.899 
"num_base_bdevs": 2, 00:09:30.899 "num_base_bdevs_discovered": 2, 00:09:30.899 "num_base_bdevs_operational": 2, 00:09:30.899 "base_bdevs_list": [ 00:09:30.899 { 00:09:30.899 "name": "BaseBdev1", 00:09:30.899 "uuid": "b963222b-fb2a-43f3-bae0-0e56347c7491", 00:09:30.899 "is_configured": true, 00:09:30.899 "data_offset": 2048, 00:09:30.899 "data_size": 63488 00:09:30.899 }, 00:09:30.899 { 00:09:30.899 "name": "BaseBdev2", 00:09:30.899 "uuid": "a0f625ab-4c8b-4603-b35d-79038ed71300", 00:09:30.899 "is_configured": true, 00:09:30.899 "data_offset": 2048, 00:09:30.899 "data_size": 63488 00:09:30.899 } 00:09:30.899 ] 00:09:30.899 } 00:09:30.899 } 00:09:30.899 }' 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:30.899 BaseBdev2' 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.899 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.899 [2024-11-20 14:20:09.852979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:31.157 14:20:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.157 14:20:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.157 "name": "Existed_Raid", 00:09:31.157 "uuid": "6421c214-dd80-48f5-9bf7-bb0c8bdcabf2", 00:09:31.157 "strip_size_kb": 0, 00:09:31.157 "state": "online", 00:09:31.157 "raid_level": "raid1", 00:09:31.157 "superblock": true, 00:09:31.157 "num_base_bdevs": 2, 00:09:31.157 "num_base_bdevs_discovered": 1, 00:09:31.157 "num_base_bdevs_operational": 1, 00:09:31.157 "base_bdevs_list": [ 00:09:31.157 { 00:09:31.157 "name": null, 00:09:31.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.157 "is_configured": false, 00:09:31.157 "data_offset": 0, 00:09:31.157 "data_size": 63488 00:09:31.157 }, 00:09:31.157 { 00:09:31.157 "name": "BaseBdev2", 00:09:31.157 "uuid": "a0f625ab-4c8b-4603-b35d-79038ed71300", 00:09:31.157 "is_configured": true, 00:09:31.157 "data_offset": 2048, 00:09:31.157 "data_size": 63488 00:09:31.157 } 00:09:31.157 ] 00:09:31.157 }' 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.157 14:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.761 14:20:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.761 [2024-11-20 14:20:10.507359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:31.761 [2024-11-20 14:20:10.507489] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.761 [2024-11-20 14:20:10.595169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.761 [2024-11-20 14:20:10.595451] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.761 [2024-11-20 14:20:10.595670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62933 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62933 ']' 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62933 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62933 00:09:31.761 killing process with pid 62933 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62933' 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62933 00:09:31.761 [2024-11-20 14:20:10.680295] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.761 14:20:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 62933 00:09:31.761 [2024-11-20 14:20:10.695032] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.138 ************************************ 00:09:33.138 END TEST raid_state_function_test_sb 00:09:33.138 ************************************ 00:09:33.138 14:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:33.138 00:09:33.138 real 0m5.430s 00:09:33.138 user 0m8.179s 00:09:33.138 sys 0m0.761s 00:09:33.138 14:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.138 14:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.138 14:20:11 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:33.138 14:20:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:33.138 14:20:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.138 14:20:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.138 ************************************ 00:09:33.138 START TEST raid_superblock_test 00:09:33.138 ************************************ 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:33.138 14:20:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63191 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:33.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63191 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63191 ']' 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.138 14:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.138 [2024-11-20 14:20:11.901720] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:09:33.138 [2024-11-20 14:20:11.902187] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63191 ] 00:09:33.138 [2024-11-20 14:20:12.089145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.397 [2024-11-20 14:20:12.217164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.656 [2024-11-20 14:20:12.417554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.656 [2024-11-20 14:20:12.417730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:34.224 14:20:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.224 malloc1 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.224 [2024-11-20 14:20:12.982080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:34.224 [2024-11-20 14:20:12.982153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.224 [2024-11-20 14:20:12.982183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:34.224 [2024-11-20 14:20:12.982197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.224 [2024-11-20 14:20:12.985027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.224 [2024-11-20 14:20:12.985066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:34.224 pt1 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:34.224 14:20:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.224 14:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.224 malloc2 00:09:34.224 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.224 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:34.224 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.224 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.224 [2024-11-20 14:20:13.038334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:34.224 [2024-11-20 14:20:13.038542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.224 [2024-11-20 14:20:13.038624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:34.225 
[2024-11-20 14:20:13.038741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.225 [2024-11-20 14:20:13.041523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.225 [2024-11-20 14:20:13.041684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:34.225 pt2 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.225 [2024-11-20 14:20:13.050444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:34.225 [2024-11-20 14:20:13.052981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:34.225 [2024-11-20 14:20:13.053344] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:34.225 [2024-11-20 14:20:13.053479] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:34.225 [2024-11-20 14:20:13.053858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:34.225 [2024-11-20 14:20:13.054195] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:34.225 [2024-11-20 14:20:13.054331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:34.225 [2024-11-20 14:20:13.054596] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.225 "name": "raid_bdev1", 00:09:34.225 "uuid": 
"e3535ff5-e438-4e55-a468-1beb2b9d7866", 00:09:34.225 "strip_size_kb": 0, 00:09:34.225 "state": "online", 00:09:34.225 "raid_level": "raid1", 00:09:34.225 "superblock": true, 00:09:34.225 "num_base_bdevs": 2, 00:09:34.225 "num_base_bdevs_discovered": 2, 00:09:34.225 "num_base_bdevs_operational": 2, 00:09:34.225 "base_bdevs_list": [ 00:09:34.225 { 00:09:34.225 "name": "pt1", 00:09:34.225 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:34.225 "is_configured": true, 00:09:34.225 "data_offset": 2048, 00:09:34.225 "data_size": 63488 00:09:34.225 }, 00:09:34.225 { 00:09:34.225 "name": "pt2", 00:09:34.225 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.225 "is_configured": true, 00:09:34.225 "data_offset": 2048, 00:09:34.225 "data_size": 63488 00:09:34.225 } 00:09:34.225 ] 00:09:34.225 }' 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.225 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.793 14:20:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.793 [2024-11-20 14:20:13.603150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.793 "name": "raid_bdev1", 00:09:34.793 "aliases": [ 00:09:34.793 "e3535ff5-e438-4e55-a468-1beb2b9d7866" 00:09:34.793 ], 00:09:34.793 "product_name": "Raid Volume", 00:09:34.793 "block_size": 512, 00:09:34.793 "num_blocks": 63488, 00:09:34.793 "uuid": "e3535ff5-e438-4e55-a468-1beb2b9d7866", 00:09:34.793 "assigned_rate_limits": { 00:09:34.793 "rw_ios_per_sec": 0, 00:09:34.793 "rw_mbytes_per_sec": 0, 00:09:34.793 "r_mbytes_per_sec": 0, 00:09:34.793 "w_mbytes_per_sec": 0 00:09:34.793 }, 00:09:34.793 "claimed": false, 00:09:34.793 "zoned": false, 00:09:34.793 "supported_io_types": { 00:09:34.793 "read": true, 00:09:34.793 "write": true, 00:09:34.793 "unmap": false, 00:09:34.793 "flush": false, 00:09:34.793 "reset": true, 00:09:34.793 "nvme_admin": false, 00:09:34.793 "nvme_io": false, 00:09:34.793 "nvme_io_md": false, 00:09:34.793 "write_zeroes": true, 00:09:34.793 "zcopy": false, 00:09:34.793 "get_zone_info": false, 00:09:34.793 "zone_management": false, 00:09:34.793 "zone_append": false, 00:09:34.793 "compare": false, 00:09:34.793 "compare_and_write": false, 00:09:34.793 "abort": false, 00:09:34.793 "seek_hole": false, 00:09:34.793 "seek_data": false, 00:09:34.793 "copy": false, 00:09:34.793 "nvme_iov_md": false 00:09:34.793 }, 00:09:34.793 "memory_domains": [ 00:09:34.793 { 00:09:34.793 "dma_device_id": "system", 00:09:34.793 "dma_device_type": 1 00:09:34.793 }, 00:09:34.793 { 00:09:34.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.793 "dma_device_type": 2 00:09:34.793 }, 00:09:34.793 { 00:09:34.793 "dma_device_id": "system", 00:09:34.793 "dma_device_type": 
1 00:09:34.793 }, 00:09:34.793 { 00:09:34.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.793 "dma_device_type": 2 00:09:34.793 } 00:09:34.793 ], 00:09:34.793 "driver_specific": { 00:09:34.793 "raid": { 00:09:34.793 "uuid": "e3535ff5-e438-4e55-a468-1beb2b9d7866", 00:09:34.793 "strip_size_kb": 0, 00:09:34.793 "state": "online", 00:09:34.793 "raid_level": "raid1", 00:09:34.793 "superblock": true, 00:09:34.793 "num_base_bdevs": 2, 00:09:34.793 "num_base_bdevs_discovered": 2, 00:09:34.793 "num_base_bdevs_operational": 2, 00:09:34.793 "base_bdevs_list": [ 00:09:34.793 { 00:09:34.793 "name": "pt1", 00:09:34.793 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:34.793 "is_configured": true, 00:09:34.793 "data_offset": 2048, 00:09:34.793 "data_size": 63488 00:09:34.793 }, 00:09:34.793 { 00:09:34.793 "name": "pt2", 00:09:34.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.793 "is_configured": true, 00:09:34.793 "data_offset": 2048, 00:09:34.793 "data_size": 63488 00:09:34.793 } 00:09:34.793 ] 00:09:34.793 } 00:09:34.793 } 00:09:34.793 }' 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:34.793 pt2' 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:34.793 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.793 14:20:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.794 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.794 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:35.053 [2024-11-20 14:20:13.851192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e3535ff5-e438-4e55-a468-1beb2b9d7866 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e3535ff5-e438-4e55-a468-1beb2b9d7866 ']' 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.053 [2024-11-20 14:20:13.890825] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:35.053 [2024-11-20 14:20:13.890857] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.053 [2024-11-20 14:20:13.890970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.053 [2024-11-20 14:20:13.891083] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.053 [2024-11-20 14:20:13.891105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.053 14:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.053 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:35.053 14:20:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:35.053 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:35.053 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:35.053 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:35.053 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.053 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:35.053 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.053 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:35.053 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.053 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.053 [2024-11-20 14:20:14.022894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:35.053 [2024-11-20 14:20:14.025473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:35.053 [2024-11-20 14:20:14.025673] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:35.053 [2024-11-20 14:20:14.025921] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:35.053 [2024-11-20 14:20:14.026093] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:35.053 [2024-11-20 14:20:14.026237] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:35.053 request: 00:09:35.053 { 00:09:35.053 "name": "raid_bdev1", 00:09:35.053 "raid_level": "raid1", 00:09:35.053 "base_bdevs": [ 00:09:35.053 "malloc1", 00:09:35.053 "malloc2" 00:09:35.053 ], 00:09:35.053 "superblock": false, 00:09:35.053 "method": "bdev_raid_create", 00:09:35.053 "req_id": 1 00:09:35.053 } 00:09:35.053 Got JSON-RPC error response 00:09:35.053 response: 00:09:35.053 { 00:09:35.053 "code": -17, 00:09:35.053 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:35.053 } 00:09:35.053 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:35.053 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:35.053 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.053 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:35.312 
14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.312 [2024-11-20 14:20:14.090904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:35.312 [2024-11-20 14:20:14.091006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.312 [2024-11-20 14:20:14.091041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:35.312 [2024-11-20 14:20:14.091059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.312 [2024-11-20 14:20:14.093888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.312 [2024-11-20 14:20:14.093939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:35.312 [2024-11-20 14:20:14.094063] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:35.312 [2024-11-20 14:20:14.094138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:35.312 pt1 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.312 14:20:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.312 "name": "raid_bdev1", 00:09:35.312 "uuid": "e3535ff5-e438-4e55-a468-1beb2b9d7866", 00:09:35.312 "strip_size_kb": 0, 00:09:35.312 "state": "configuring", 00:09:35.312 "raid_level": "raid1", 00:09:35.312 "superblock": true, 00:09:35.312 "num_base_bdevs": 2, 00:09:35.312 "num_base_bdevs_discovered": 1, 00:09:35.312 "num_base_bdevs_operational": 2, 00:09:35.312 "base_bdevs_list": [ 00:09:35.312 { 00:09:35.312 "name": "pt1", 00:09:35.312 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.312 "is_configured": true, 00:09:35.312 "data_offset": 2048, 00:09:35.312 "data_size": 63488 00:09:35.312 }, 00:09:35.312 { 00:09:35.312 "name": null, 00:09:35.312 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.312 "is_configured": false, 00:09:35.312 "data_offset": 2048, 00:09:35.312 "data_size": 63488 00:09:35.312 } 00:09:35.312 ] 00:09:35.312 }' 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:09:35.312 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.879 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:35.879 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:35.879 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:35.879 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:35.879 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.879 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.879 [2024-11-20 14:20:14.643071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:35.879 [2024-11-20 14:20:14.643173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.879 [2024-11-20 14:20:14.643204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:35.879 [2024-11-20 14:20:14.643221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.879 [2024-11-20 14:20:14.643774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.879 [2024-11-20 14:20:14.643822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:35.879 [2024-11-20 14:20:14.643924] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:35.879 [2024-11-20 14:20:14.643965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:35.879 [2024-11-20 14:20:14.644131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:35.879 [2024-11-20 14:20:14.644159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, 
blocklen 512 00:09:35.879 [2024-11-20 14:20:14.644473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:35.879 [2024-11-20 14:20:14.644795] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:35.880 [2024-11-20 14:20:14.644819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:35.880 [2024-11-20 14:20:14.645014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.880 pt2 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.880 14:20:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.880 "name": "raid_bdev1", 00:09:35.880 "uuid": "e3535ff5-e438-4e55-a468-1beb2b9d7866", 00:09:35.880 "strip_size_kb": 0, 00:09:35.880 "state": "online", 00:09:35.880 "raid_level": "raid1", 00:09:35.880 "superblock": true, 00:09:35.880 "num_base_bdevs": 2, 00:09:35.880 "num_base_bdevs_discovered": 2, 00:09:35.880 "num_base_bdevs_operational": 2, 00:09:35.880 "base_bdevs_list": [ 00:09:35.880 { 00:09:35.880 "name": "pt1", 00:09:35.880 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.880 "is_configured": true, 00:09:35.880 "data_offset": 2048, 00:09:35.880 "data_size": 63488 00:09:35.880 }, 00:09:35.880 { 00:09:35.880 "name": "pt2", 00:09:35.880 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.880 "is_configured": true, 00:09:35.880 "data_offset": 2048, 00:09:35.880 "data_size": 63488 00:09:35.880 } 00:09:35.880 ] 00:09:35.880 }' 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.880 14:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.447 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:36.448 [2024-11-20 14:20:15.139494] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:36.448 "name": "raid_bdev1", 00:09:36.448 "aliases": [ 00:09:36.448 "e3535ff5-e438-4e55-a468-1beb2b9d7866" 00:09:36.448 ], 00:09:36.448 "product_name": "Raid Volume", 00:09:36.448 "block_size": 512, 00:09:36.448 "num_blocks": 63488, 00:09:36.448 "uuid": "e3535ff5-e438-4e55-a468-1beb2b9d7866", 00:09:36.448 "assigned_rate_limits": { 00:09:36.448 "rw_ios_per_sec": 0, 00:09:36.448 "rw_mbytes_per_sec": 0, 00:09:36.448 "r_mbytes_per_sec": 0, 00:09:36.448 "w_mbytes_per_sec": 0 00:09:36.448 }, 00:09:36.448 "claimed": false, 00:09:36.448 "zoned": false, 00:09:36.448 "supported_io_types": { 00:09:36.448 "read": true, 00:09:36.448 "write": true, 00:09:36.448 "unmap": false, 00:09:36.448 "flush": false, 00:09:36.448 "reset": true, 00:09:36.448 "nvme_admin": false, 00:09:36.448 "nvme_io": false, 00:09:36.448 "nvme_io_md": false, 00:09:36.448 "write_zeroes": true, 00:09:36.448 "zcopy": 
false, 00:09:36.448 "get_zone_info": false, 00:09:36.448 "zone_management": false, 00:09:36.448 "zone_append": false, 00:09:36.448 "compare": false, 00:09:36.448 "compare_and_write": false, 00:09:36.448 "abort": false, 00:09:36.448 "seek_hole": false, 00:09:36.448 "seek_data": false, 00:09:36.448 "copy": false, 00:09:36.448 "nvme_iov_md": false 00:09:36.448 }, 00:09:36.448 "memory_domains": [ 00:09:36.448 { 00:09:36.448 "dma_device_id": "system", 00:09:36.448 "dma_device_type": 1 00:09:36.448 }, 00:09:36.448 { 00:09:36.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.448 "dma_device_type": 2 00:09:36.448 }, 00:09:36.448 { 00:09:36.448 "dma_device_id": "system", 00:09:36.448 "dma_device_type": 1 00:09:36.448 }, 00:09:36.448 { 00:09:36.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.448 "dma_device_type": 2 00:09:36.448 } 00:09:36.448 ], 00:09:36.448 "driver_specific": { 00:09:36.448 "raid": { 00:09:36.448 "uuid": "e3535ff5-e438-4e55-a468-1beb2b9d7866", 00:09:36.448 "strip_size_kb": 0, 00:09:36.448 "state": "online", 00:09:36.448 "raid_level": "raid1", 00:09:36.448 "superblock": true, 00:09:36.448 "num_base_bdevs": 2, 00:09:36.448 "num_base_bdevs_discovered": 2, 00:09:36.448 "num_base_bdevs_operational": 2, 00:09:36.448 "base_bdevs_list": [ 00:09:36.448 { 00:09:36.448 "name": "pt1", 00:09:36.448 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:36.448 "is_configured": true, 00:09:36.448 "data_offset": 2048, 00:09:36.448 "data_size": 63488 00:09:36.448 }, 00:09:36.448 { 00:09:36.448 "name": "pt2", 00:09:36.448 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:36.448 "is_configured": true, 00:09:36.448 "data_offset": 2048, 00:09:36.448 "data_size": 63488 00:09:36.448 } 00:09:36.448 ] 00:09:36.448 } 00:09:36.448 } 00:09:36.448 }' 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:36.448 pt2' 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:36.448 [2024-11-20 14:20:15.407547] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.448 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e3535ff5-e438-4e55-a468-1beb2b9d7866 '!=' e3535ff5-e438-4e55-a468-1beb2b9d7866 ']' 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.707 [2024-11-20 14:20:15.451319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.707 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.707 "name": "raid_bdev1", 00:09:36.707 "uuid": "e3535ff5-e438-4e55-a468-1beb2b9d7866", 00:09:36.707 "strip_size_kb": 0, 00:09:36.707 "state": "online", 00:09:36.707 "raid_level": "raid1", 00:09:36.707 "superblock": true, 00:09:36.707 "num_base_bdevs": 2, 00:09:36.707 "num_base_bdevs_discovered": 1, 00:09:36.707 "num_base_bdevs_operational": 1, 00:09:36.707 "base_bdevs_list": [ 00:09:36.707 { 00:09:36.707 "name": null, 
00:09:36.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.707 "is_configured": false, 00:09:36.707 "data_offset": 0, 00:09:36.707 "data_size": 63488 00:09:36.707 }, 00:09:36.708 { 00:09:36.708 "name": "pt2", 00:09:36.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:36.708 "is_configured": true, 00:09:36.708 "data_offset": 2048, 00:09:36.708 "data_size": 63488 00:09:36.708 } 00:09:36.708 ] 00:09:36.708 }' 00:09:36.708 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.708 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.275 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:37.275 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.275 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.275 [2024-11-20 14:20:15.963414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:37.275 [2024-11-20 14:20:15.963458] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.275 [2024-11-20 14:20:15.963551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.275 [2024-11-20 14:20:15.963617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.275 [2024-11-20 14:20:15.963636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:37.275 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.275 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.275 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.275 14:20:15 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:37.275 14:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:37.275 14:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.275 [2024-11-20 14:20:16.039435] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:37.275 [2024-11-20 14:20:16.039509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.275 [2024-11-20 14:20:16.039536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:37.275 [2024-11-20 14:20:16.039552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.275 [2024-11-20 14:20:16.042419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.275 [2024-11-20 14:20:16.042597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:37.275 [2024-11-20 14:20:16.042718] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:37.275 [2024-11-20 14:20:16.042782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:37.275 [2024-11-20 14:20:16.042910] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:37.275 [2024-11-20 14:20:16.042933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:37.275 [2024-11-20 14:20:16.043263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:37.275 [2024-11-20 14:20:16.043456] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:37.275 [2024-11-20 14:20:16.043472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:37.275 [2024-11-20 14:20:16.043688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.275 pt2 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.275 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.276 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.276 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.276 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.276 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.276 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.276 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.276 "name": "raid_bdev1", 00:09:37.276 "uuid": "e3535ff5-e438-4e55-a468-1beb2b9d7866", 00:09:37.276 "strip_size_kb": 0, 00:09:37.276 "state": "online", 00:09:37.276 "raid_level": "raid1", 00:09:37.276 "superblock": true, 00:09:37.276 "num_base_bdevs": 2, 00:09:37.276 "num_base_bdevs_discovered": 1, 00:09:37.276 "num_base_bdevs_operational": 1, 00:09:37.276 "base_bdevs_list": [ 00:09:37.276 { 00:09:37.276 "name": null, 
00:09:37.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.276 "is_configured": false, 00:09:37.276 "data_offset": 2048, 00:09:37.276 "data_size": 63488 00:09:37.276 }, 00:09:37.276 { 00:09:37.276 "name": "pt2", 00:09:37.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:37.276 "is_configured": true, 00:09:37.276 "data_offset": 2048, 00:09:37.276 "data_size": 63488 00:09:37.276 } 00:09:37.276 ] 00:09:37.276 }' 00:09:37.276 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.276 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.842 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:37.842 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.842 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.842 [2024-11-20 14:20:16.587747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:37.842 [2024-11-20 14:20:16.587783] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.842 [2024-11-20 14:20:16.587880] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.842 [2024-11-20 14:20:16.587946] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.842 [2024-11-20 14:20:16.587961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:37.842 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.842 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.842 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.842 14:20:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.842 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:37.842 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.842 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:37.842 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:37.842 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:09:37.842 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:37.842 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.842 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.842 [2024-11-20 14:20:16.651805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:37.843 [2024-11-20 14:20:16.652056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.843 [2024-11-20 14:20:16.652104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:37.843 [2024-11-20 14:20:16.652121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.843 [2024-11-20 14:20:16.654965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.843 [2024-11-20 14:20:16.655029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:37.843 [2024-11-20 14:20:16.655141] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:37.843 [2024-11-20 14:20:16.655197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:37.843 [2024-11-20 14:20:16.655371] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: 
raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:37.843 [2024-11-20 14:20:16.655390] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:37.843 [2024-11-20 14:20:16.655411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:37.843 [2024-11-20 14:20:16.655475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:37.843 [2024-11-20 14:20:16.655577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:37.843 [2024-11-20 14:20:16.655592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:37.843 [2024-11-20 14:20:16.655909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:37.843 [2024-11-20 14:20:16.656125] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:37.843 [2024-11-20 14:20:16.656148] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:37.843 [2024-11-20 14:20:16.656379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.843 pt1 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.843 "name": "raid_bdev1", 00:09:37.843 "uuid": "e3535ff5-e438-4e55-a468-1beb2b9d7866", 00:09:37.843 "strip_size_kb": 0, 00:09:37.843 "state": "online", 00:09:37.843 "raid_level": "raid1", 00:09:37.843 "superblock": true, 00:09:37.843 "num_base_bdevs": 2, 00:09:37.843 "num_base_bdevs_discovered": 1, 00:09:37.843 "num_base_bdevs_operational": 1, 00:09:37.843 "base_bdevs_list": [ 00:09:37.843 { 00:09:37.843 "name": null, 00:09:37.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.843 "is_configured": false, 00:09:37.843 "data_offset": 2048, 00:09:37.843 "data_size": 63488 00:09:37.843 }, 00:09:37.843 { 00:09:37.843 "name": "pt2", 00:09:37.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:37.843 "is_configured": true, 00:09:37.843 
"data_offset": 2048, 00:09:37.843 "data_size": 63488 00:09:37.843 } 00:09:37.843 ] 00:09:37.843 }' 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.843 14:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.428 14:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:38.428 14:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:38.428 14:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.428 14:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.428 14:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.428 14:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:38.428 14:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:38.428 14:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.428 14:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.428 14:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:38.428 [2024-11-20 14:20:17.260759] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.429 14:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.429 14:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e3535ff5-e438-4e55-a468-1beb2b9d7866 '!=' e3535ff5-e438-4e55-a468-1beb2b9d7866 ']' 00:09:38.429 14:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63191 00:09:38.429 14:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63191 ']' 00:09:38.429 
14:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63191 00:09:38.429 14:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:38.429 14:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.429 14:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63191 00:09:38.429 killing process with pid 63191 00:09:38.429 14:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.429 14:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.429 14:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63191' 00:09:38.429 14:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63191 00:09:38.429 14:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63191 00:09:38.429 [2024-11-20 14:20:17.324463] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.429 [2024-11-20 14:20:17.324583] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.429 [2024-11-20 14:20:17.324648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.429 [2024-11-20 14:20:17.324674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:38.687 [2024-11-20 14:20:17.512260] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:39.622 14:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:39.622 00:09:39.622 real 0m6.759s 00:09:39.622 user 0m10.721s 00:09:39.622 sys 0m0.935s 00:09:39.622 ************************************ 00:09:39.622 END TEST raid_superblock_test 00:09:39.622 ************************************ 
00:09:39.622 14:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.622 14:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.622 14:20:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:39.622 14:20:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:39.622 14:20:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.622 14:20:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:39.622 ************************************ 00:09:39.622 START TEST raid_read_error_test 00:09:39.622 ************************************ 00:09:39.622 14:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:09:39.622 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:39.622 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:39.623 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:39.623 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:39.623 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.623 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:39.623 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:39.623 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.623 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:39.623 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:39.623 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.881 14:20:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HNV9GRzkyy 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63521 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63521 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:39.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63521 ']' 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.881 14:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.881 [2024-11-20 14:20:18.705109] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:09:39.881 [2024-11-20 14:20:18.705466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63521 ] 00:09:40.139 [2024-11-20 14:20:18.879701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.139 [2024-11-20 14:20:19.010760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.396 [2024-11-20 14:20:19.215109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.396 [2024-11-20 14:20:19.215212] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.962 BaseBdev1_malloc 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.962 true 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.962 [2024-11-20 14:20:19.823430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:40.962 [2024-11-20 14:20:19.823501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.962 [2024-11-20 14:20:19.823532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:40.962 [2024-11-20 14:20:19.823550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.962 [2024-11-20 14:20:19.826368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.962 [2024-11-20 14:20:19.826418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:40.962 BaseBdev1 00:09:40.962 
14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.962 BaseBdev2_malloc 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.962 true 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.962 [2024-11-20 14:20:19.879390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:40.962 [2024-11-20 14:20:19.879629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.962 [2024-11-20 14:20:19.879667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:40.962 [2024-11-20 14:20:19.879687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.962 [2024-11-20 
14:20:19.882563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.962 [2024-11-20 14:20:19.882612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:40.962 BaseBdev2 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.962 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.962 [2024-11-20 14:20:19.887515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.962 [2024-11-20 14:20:19.889972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.962 [2024-11-20 14:20:19.890267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:40.962 [2024-11-20 14:20:19.890299] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:40.962 [2024-11-20 14:20:19.890639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:40.962 [2024-11-20 14:20:19.890879] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:40.962 [2024-11-20 14:20:19.890903] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:40.962 [2024-11-20 14:20:19.891148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.963 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.963 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:40.963 
14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.963 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.963 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.963 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.963 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.963 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.963 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.963 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.963 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.963 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.963 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.963 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.963 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.963 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.221 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.221 "name": "raid_bdev1", 00:09:41.221 "uuid": "7e280073-b4b6-4049-81a8-c3986de842c7", 00:09:41.221 "strip_size_kb": 0, 00:09:41.221 "state": "online", 00:09:41.221 "raid_level": "raid1", 00:09:41.221 "superblock": true, 00:09:41.221 "num_base_bdevs": 2, 00:09:41.221 "num_base_bdevs_discovered": 2, 00:09:41.221 "num_base_bdevs_operational": 2, 00:09:41.221 "base_bdevs_list": [ 
00:09:41.221 { 00:09:41.221 "name": "BaseBdev1", 00:09:41.221 "uuid": "1b5304c4-ec78-58a9-927c-ef1c97ceb4ed", 00:09:41.221 "is_configured": true, 00:09:41.221 "data_offset": 2048, 00:09:41.221 "data_size": 63488 00:09:41.221 }, 00:09:41.221 { 00:09:41.221 "name": "BaseBdev2", 00:09:41.221 "uuid": "ee4dd706-fe2b-54c5-bdfd-3c111f654202", 00:09:41.221 "is_configured": true, 00:09:41.221 "data_offset": 2048, 00:09:41.221 "data_size": 63488 00:09:41.221 } 00:09:41.221 ] 00:09:41.221 }' 00:09:41.221 14:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.221 14:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.479 14:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:41.479 14:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:41.737 [2024-11-20 14:20:20.473027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:42.675 14:20:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.675 "name": "raid_bdev1", 00:09:42.675 "uuid": "7e280073-b4b6-4049-81a8-c3986de842c7", 00:09:42.675 "strip_size_kb": 0, 00:09:42.675 "state": "online", 00:09:42.675 "raid_level": "raid1", 00:09:42.675 "superblock": true, 00:09:42.675 "num_base_bdevs": 2, 
00:09:42.675 "num_base_bdevs_discovered": 2, 00:09:42.675 "num_base_bdevs_operational": 2, 00:09:42.675 "base_bdevs_list": [ 00:09:42.675 { 00:09:42.675 "name": "BaseBdev1", 00:09:42.675 "uuid": "1b5304c4-ec78-58a9-927c-ef1c97ceb4ed", 00:09:42.675 "is_configured": true, 00:09:42.675 "data_offset": 2048, 00:09:42.675 "data_size": 63488 00:09:42.675 }, 00:09:42.675 { 00:09:42.675 "name": "BaseBdev2", 00:09:42.675 "uuid": "ee4dd706-fe2b-54c5-bdfd-3c111f654202", 00:09:42.675 "is_configured": true, 00:09:42.675 "data_offset": 2048, 00:09:42.675 "data_size": 63488 00:09:42.675 } 00:09:42.675 ] 00:09:42.675 }' 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.675 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.934 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:42.934 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.934 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.934 [2024-11-20 14:20:21.912036] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:42.934 [2024-11-20 14:20:21.912090] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.193 [2024-11-20 14:20:21.915418] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.193 [2024-11-20 14:20:21.915482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.193 [2024-11-20 14:20:21.915587] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.193 [2024-11-20 14:20:21.915607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:43.193 { 00:09:43.193 "results": [ 00:09:43.193 { 00:09:43.193 "job": 
"raid_bdev1", 00:09:43.193 "core_mask": "0x1", 00:09:43.193 "workload": "randrw", 00:09:43.193 "percentage": 50, 00:09:43.193 "status": "finished", 00:09:43.193 "queue_depth": 1, 00:09:43.193 "io_size": 131072, 00:09:43.193 "runtime": 1.436558, 00:09:43.193 "iops": 12339.912485259905, 00:09:43.193 "mibps": 1542.4890606574882, 00:09:43.193 "io_failed": 0, 00:09:43.193 "io_timeout": 0, 00:09:43.193 "avg_latency_us": 76.72812197110724, 00:09:43.193 "min_latency_us": 43.28727272727273, 00:09:43.193 "max_latency_us": 1891.6072727272726 00:09:43.193 } 00:09:43.193 ], 00:09:43.193 "core_count": 1 00:09:43.193 } 00:09:43.193 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.193 14:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63521 00:09:43.193 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63521 ']' 00:09:43.193 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63521 00:09:43.193 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:43.193 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.193 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63521 00:09:43.193 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:43.193 killing process with pid 63521 00:09:43.193 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:43.193 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63521' 00:09:43.193 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63521 00:09:43.193 14:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63521 00:09:43.193 [2024-11-20 
14:20:21.959681] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:43.193 [2024-11-20 14:20:22.081631] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.571 14:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HNV9GRzkyy 00:09:44.571 14:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:44.571 14:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:44.571 14:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:44.571 14:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:44.571 14:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:44.571 14:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:44.571 14:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:44.571 ************************************ 00:09:44.571 END TEST raid_read_error_test 00:09:44.571 ************************************ 00:09:44.571 00:09:44.571 real 0m4.571s 00:09:44.571 user 0m5.758s 00:09:44.571 sys 0m0.560s 00:09:44.571 14:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.571 14:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.571 14:20:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:44.571 14:20:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:44.571 14:20:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.571 14:20:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.571 ************************************ 00:09:44.571 START TEST raid_write_error_test 00:09:44.571 ************************************ 00:09:44.571 14:20:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:44.571 
14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:44.571 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2OuAK0GFf2 00:09:44.572 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63674 00:09:44.572 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63674 00:09:44.572 14:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63674 ']' 00:09:44.572 14:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:44.572 14:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.572 14:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.572 14:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.572 14:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.572 14:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.572 [2024-11-20 14:20:23.323615] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:09:44.572 [2024-11-20 14:20:23.323775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63674 ] 00:09:44.572 [2024-11-20 14:20:23.500057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.830 [2024-11-20 14:20:23.628297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.088 [2024-11-20 14:20:23.829635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.088 [2024-11-20 14:20:23.829722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.655 BaseBdev1_malloc 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.655 true 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.655 [2024-11-20 14:20:24.405936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:45.655 [2024-11-20 14:20:24.406015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.655 [2024-11-20 14:20:24.406045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:45.655 [2024-11-20 14:20:24.406065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.655 [2024-11-20 14:20:24.408753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.655 [2024-11-20 14:20:24.408807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:45.655 BaseBdev1 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.655 BaseBdev2_malloc 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:45.655 14:20:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.655 true 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.655 [2024-11-20 14:20:24.461551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:45.655 [2024-11-20 14:20:24.461622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.655 [2024-11-20 14:20:24.461647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:45.655 [2024-11-20 14:20:24.461664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.655 [2024-11-20 14:20:24.464391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.655 [2024-11-20 14:20:24.464440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:45.655 BaseBdev2 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.655 [2024-11-20 14:20:24.469613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:45.655 [2024-11-20 14:20:24.472085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.655 [2024-11-20 14:20:24.472345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:45.655 [2024-11-20 14:20:24.472374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:45.655 [2024-11-20 14:20:24.472677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:45.655 [2024-11-20 14:20:24.472906] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:45.655 [2024-11-20 14:20:24.472929] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:45.655 [2024-11-20 14:20:24.473134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.655 "name": "raid_bdev1", 00:09:45.655 "uuid": "392dd5d0-47e2-4cb7-ab74-29063670829d", 00:09:45.655 "strip_size_kb": 0, 00:09:45.655 "state": "online", 00:09:45.655 "raid_level": "raid1", 00:09:45.655 "superblock": true, 00:09:45.655 "num_base_bdevs": 2, 00:09:45.655 "num_base_bdevs_discovered": 2, 00:09:45.655 "num_base_bdevs_operational": 2, 00:09:45.655 "base_bdevs_list": [ 00:09:45.655 { 00:09:45.655 "name": "BaseBdev1", 00:09:45.655 "uuid": "cbd7410e-2682-55fe-8f51-88cb8e77324e", 00:09:45.655 "is_configured": true, 00:09:45.655 "data_offset": 2048, 00:09:45.655 "data_size": 63488 00:09:45.655 }, 00:09:45.655 { 00:09:45.655 "name": "BaseBdev2", 00:09:45.655 "uuid": "60de4cd7-602c-53e6-8c4b-cdc5e2b47631", 00:09:45.655 "is_configured": true, 00:09:45.655 "data_offset": 2048, 00:09:45.655 "data_size": 63488 00:09:45.655 } 00:09:45.655 ] 00:09:45.655 }' 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.655 14:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.222 14:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:46.222 14:20:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:46.222 [2024-11-20 14:20:25.123173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:47.158 14:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:47.158 14:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.158 14:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.158 [2024-11-20 14:20:25.994744] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:47.158 [2024-11-20 14:20:25.994828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:47.158 [2024-11-20 14:20:25.995086] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:09:47.158 14:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.158 14:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:47.158 14:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:47.158 14:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:47.158 14:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:47.158 14:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:47.158 14:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.158 14:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.158 14:20:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.158 14:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.158 14:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:47.158 14:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.158 14:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.158 14:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.158 14:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.158 14:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.158 14:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.158 14:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.158 14:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.158 14:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.158 14:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.158 "name": "raid_bdev1", 00:09:47.158 "uuid": "392dd5d0-47e2-4cb7-ab74-29063670829d", 00:09:47.158 "strip_size_kb": 0, 00:09:47.158 "state": "online", 00:09:47.158 "raid_level": "raid1", 00:09:47.158 "superblock": true, 00:09:47.158 "num_base_bdevs": 2, 00:09:47.158 "num_base_bdevs_discovered": 1, 00:09:47.158 "num_base_bdevs_operational": 1, 00:09:47.158 "base_bdevs_list": [ 00:09:47.158 { 00:09:47.158 "name": null, 00:09:47.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.158 "is_configured": false, 00:09:47.158 "data_offset": 0, 00:09:47.158 "data_size": 63488 00:09:47.158 }, 00:09:47.158 { 00:09:47.158 "name": 
"BaseBdev2", 00:09:47.158 "uuid": "60de4cd7-602c-53e6-8c4b-cdc5e2b47631", 00:09:47.158 "is_configured": true, 00:09:47.158 "data_offset": 2048, 00:09:47.158 "data_size": 63488 00:09:47.158 } 00:09:47.158 ] 00:09:47.158 }' 00:09:47.158 14:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.158 14:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.725 14:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:47.725 14:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.725 14:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.725 [2024-11-20 14:20:26.530114] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.725 [2024-11-20 14:20:26.530154] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.725 [2024-11-20 14:20:26.533372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.725 [2024-11-20 14:20:26.533429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.725 [2024-11-20 14:20:26.533511] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.725 [2024-11-20 14:20:26.533530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:47.725 { 00:09:47.725 "results": [ 00:09:47.725 { 00:09:47.725 "job": "raid_bdev1", 00:09:47.725 "core_mask": "0x1", 00:09:47.725 "workload": "randrw", 00:09:47.725 "percentage": 50, 00:09:47.725 "status": "finished", 00:09:47.725 "queue_depth": 1, 00:09:47.725 "io_size": 131072, 00:09:47.725 "runtime": 1.404586, 00:09:47.725 "iops": 14810.057910302396, 00:09:47.725 "mibps": 1851.2572387877995, 00:09:47.725 "io_failed": 0, 00:09:47.725 "io_timeout": 0, 
00:09:47.725 "avg_latency_us": 63.39120818802389, 00:09:47.725 "min_latency_us": 41.192727272727275, 00:09:47.725 "max_latency_us": 1809.6872727272728 00:09:47.725 } 00:09:47.725 ], 00:09:47.725 "core_count": 1 00:09:47.725 } 00:09:47.725 14:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.725 14:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63674 00:09:47.725 14:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63674 ']' 00:09:47.725 14:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63674 00:09:47.725 14:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:47.725 14:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.725 14:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63674 00:09:47.725 14:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.725 killing process with pid 63674 00:09:47.725 14:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.725 14:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63674' 00:09:47.725 14:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63674 00:09:47.725 [2024-11-20 14:20:26.567213] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.725 14:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63674 00:09:47.725 [2024-11-20 14:20:26.687307] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.102 14:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2OuAK0GFf2 00:09:49.102 14:20:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:49.102 14:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:49.102 14:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:49.102 14:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:49.102 14:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:49.102 14:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:49.102 14:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:49.102 00:09:49.102 real 0m4.568s 00:09:49.102 user 0m5.810s 00:09:49.102 sys 0m0.505s 00:09:49.102 14:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.102 ************************************ 00:09:49.102 END TEST raid_write_error_test 00:09:49.102 ************************************ 00:09:49.102 14:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.102 14:20:27 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:49.102 14:20:27 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:49.102 14:20:27 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:49.102 14:20:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:49.102 14:20:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.102 14:20:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.102 ************************************ 00:09:49.102 START TEST raid_state_function_test 00:09:49.102 ************************************ 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:49.102 
14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:49.102 Process raid pid: 63812 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63812 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63812' 00:09:49.102 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:49.103 14:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63812 00:09:49.103 14:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63812 ']' 00:09:49.103 14:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.103 14:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.103 14:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:49.103 14:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.103 14:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.103 [2024-11-20 14:20:27.949422] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:09:49.103 [2024-11-20 14:20:27.949611] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.361 [2024-11-20 14:20:28.141854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.361 [2024-11-20 14:20:28.297180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.619 [2024-11-20 14:20:28.508674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.619 [2024-11-20 14:20:28.508716] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.185 14:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.185 14:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:50.185 14:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:50.185 14:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.185 14:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.185 [2024-11-20 14:20:28.968009] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.185 [2024-11-20 14:20:28.968079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.185 [2024-11-20 14:20:28.968098] 
bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.186 [2024-11-20 14:20:28.968115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.186 [2024-11-20 14:20:28.968126] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:50.186 [2024-11-20 14:20:28.968141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:50.186 14:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.186 14:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:50.186 14:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.186 14:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.186 14:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.186 14:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.186 14:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.186 14:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.186 14:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.186 14:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.186 14:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.186 14:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.186 14:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.186 14:20:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.186 14:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.186 14:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.186 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.186 "name": "Existed_Raid", 00:09:50.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.186 "strip_size_kb": 64, 00:09:50.186 "state": "configuring", 00:09:50.186 "raid_level": "raid0", 00:09:50.186 "superblock": false, 00:09:50.186 "num_base_bdevs": 3, 00:09:50.186 "num_base_bdevs_discovered": 0, 00:09:50.186 "num_base_bdevs_operational": 3, 00:09:50.186 "base_bdevs_list": [ 00:09:50.186 { 00:09:50.186 "name": "BaseBdev1", 00:09:50.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.186 "is_configured": false, 00:09:50.186 "data_offset": 0, 00:09:50.186 "data_size": 0 00:09:50.186 }, 00:09:50.186 { 00:09:50.186 "name": "BaseBdev2", 00:09:50.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.186 "is_configured": false, 00:09:50.186 "data_offset": 0, 00:09:50.186 "data_size": 0 00:09:50.186 }, 00:09:50.186 { 00:09:50.186 "name": "BaseBdev3", 00:09:50.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.186 "is_configured": false, 00:09:50.186 "data_offset": 0, 00:09:50.186 "data_size": 0 00:09:50.186 } 00:09:50.186 ] 00:09:50.186 }' 00:09:50.186 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.186 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.757 14:20:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.757 [2024-11-20 14:20:29.492055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:50.757 [2024-11-20 14:20:29.492099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.757 [2024-11-20 14:20:29.504051] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.757 [2024-11-20 14:20:29.504230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.757 [2024-11-20 14:20:29.504348] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.757 [2024-11-20 14:20:29.504409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.757 [2024-11-20 14:20:29.504621] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:50.757 [2024-11-20 14:20:29.504689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.757 [2024-11-20 14:20:29.548563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.757 BaseBdev1 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.757 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.757 [ 00:09:50.757 { 00:09:50.758 "name": "BaseBdev1", 00:09:50.758 "aliases": [ 00:09:50.758 "bfb664cc-d058-4f1a-9627-09f6cdd8dd2f" 00:09:50.758 ], 00:09:50.758 
"product_name": "Malloc disk", 00:09:50.758 "block_size": 512, 00:09:50.758 "num_blocks": 65536, 00:09:50.758 "uuid": "bfb664cc-d058-4f1a-9627-09f6cdd8dd2f", 00:09:50.758 "assigned_rate_limits": { 00:09:50.758 "rw_ios_per_sec": 0, 00:09:50.758 "rw_mbytes_per_sec": 0, 00:09:50.758 "r_mbytes_per_sec": 0, 00:09:50.758 "w_mbytes_per_sec": 0 00:09:50.758 }, 00:09:50.758 "claimed": true, 00:09:50.758 "claim_type": "exclusive_write", 00:09:50.758 "zoned": false, 00:09:50.758 "supported_io_types": { 00:09:50.758 "read": true, 00:09:50.758 "write": true, 00:09:50.758 "unmap": true, 00:09:50.758 "flush": true, 00:09:50.758 "reset": true, 00:09:50.758 "nvme_admin": false, 00:09:50.758 "nvme_io": false, 00:09:50.758 "nvme_io_md": false, 00:09:50.758 "write_zeroes": true, 00:09:50.758 "zcopy": true, 00:09:50.758 "get_zone_info": false, 00:09:50.758 "zone_management": false, 00:09:50.758 "zone_append": false, 00:09:50.758 "compare": false, 00:09:50.758 "compare_and_write": false, 00:09:50.758 "abort": true, 00:09:50.758 "seek_hole": false, 00:09:50.758 "seek_data": false, 00:09:50.758 "copy": true, 00:09:50.758 "nvme_iov_md": false 00:09:50.758 }, 00:09:50.758 "memory_domains": [ 00:09:50.758 { 00:09:50.758 "dma_device_id": "system", 00:09:50.758 "dma_device_type": 1 00:09:50.758 }, 00:09:50.758 { 00:09:50.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.758 "dma_device_type": 2 00:09:50.758 } 00:09:50.758 ], 00:09:50.758 "driver_specific": {} 00:09:50.758 } 00:09:50.758 ] 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.758 14:20:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.758 "name": "Existed_Raid", 00:09:50.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.758 "strip_size_kb": 64, 00:09:50.758 "state": "configuring", 00:09:50.758 "raid_level": "raid0", 00:09:50.758 "superblock": false, 00:09:50.758 "num_base_bdevs": 3, 00:09:50.758 "num_base_bdevs_discovered": 1, 00:09:50.758 "num_base_bdevs_operational": 3, 00:09:50.758 "base_bdevs_list": [ 00:09:50.758 { 00:09:50.758 "name": "BaseBdev1", 
00:09:50.758 "uuid": "bfb664cc-d058-4f1a-9627-09f6cdd8dd2f", 00:09:50.758 "is_configured": true, 00:09:50.758 "data_offset": 0, 00:09:50.758 "data_size": 65536 00:09:50.758 }, 00:09:50.758 { 00:09:50.758 "name": "BaseBdev2", 00:09:50.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.758 "is_configured": false, 00:09:50.758 "data_offset": 0, 00:09:50.758 "data_size": 0 00:09:50.758 }, 00:09:50.758 { 00:09:50.758 "name": "BaseBdev3", 00:09:50.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.758 "is_configured": false, 00:09:50.758 "data_offset": 0, 00:09:50.758 "data_size": 0 00:09:50.758 } 00:09:50.758 ] 00:09:50.758 }' 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.758 14:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.324 [2024-11-20 14:20:30.116796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:51.324 [2024-11-20 14:20:30.116863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.324 [2024-11-20 
14:20:30.124880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.324 [2024-11-20 14:20:30.127299] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:51.324 [2024-11-20 14:20:30.127487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:51.324 [2024-11-20 14:20:30.127524] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:51.324 [2024-11-20 14:20:30.127542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.324 "name": "Existed_Raid", 00:09:51.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.324 "strip_size_kb": 64, 00:09:51.324 "state": "configuring", 00:09:51.324 "raid_level": "raid0", 00:09:51.324 "superblock": false, 00:09:51.324 "num_base_bdevs": 3, 00:09:51.324 "num_base_bdevs_discovered": 1, 00:09:51.324 "num_base_bdevs_operational": 3, 00:09:51.324 "base_bdevs_list": [ 00:09:51.324 { 00:09:51.324 "name": "BaseBdev1", 00:09:51.324 "uuid": "bfb664cc-d058-4f1a-9627-09f6cdd8dd2f", 00:09:51.324 "is_configured": true, 00:09:51.324 "data_offset": 0, 00:09:51.324 "data_size": 65536 00:09:51.324 }, 00:09:51.324 { 00:09:51.324 "name": "BaseBdev2", 00:09:51.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.324 "is_configured": false, 00:09:51.324 "data_offset": 0, 00:09:51.324 "data_size": 0 00:09:51.324 }, 00:09:51.324 { 00:09:51.324 "name": "BaseBdev3", 00:09:51.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.324 "is_configured": false, 00:09:51.324 "data_offset": 0, 00:09:51.324 "data_size": 0 00:09:51.324 } 00:09:51.324 ] 00:09:51.324 }' 00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:51.324 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.905 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.906 [2024-11-20 14:20:30.676488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.906 BaseBdev2 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:51.906 14:20:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.906 [ 00:09:51.906 { 00:09:51.906 "name": "BaseBdev2", 00:09:51.906 "aliases": [ 00:09:51.906 "8648112a-72ba-4e63-827c-48ef988bb0a4" 00:09:51.906 ], 00:09:51.906 "product_name": "Malloc disk", 00:09:51.906 "block_size": 512, 00:09:51.906 "num_blocks": 65536, 00:09:51.906 "uuid": "8648112a-72ba-4e63-827c-48ef988bb0a4", 00:09:51.906 "assigned_rate_limits": { 00:09:51.906 "rw_ios_per_sec": 0, 00:09:51.906 "rw_mbytes_per_sec": 0, 00:09:51.906 "r_mbytes_per_sec": 0, 00:09:51.906 "w_mbytes_per_sec": 0 00:09:51.906 }, 00:09:51.906 "claimed": true, 00:09:51.906 "claim_type": "exclusive_write", 00:09:51.906 "zoned": false, 00:09:51.906 "supported_io_types": { 00:09:51.906 "read": true, 00:09:51.906 "write": true, 00:09:51.906 "unmap": true, 00:09:51.906 "flush": true, 00:09:51.906 "reset": true, 00:09:51.906 "nvme_admin": false, 00:09:51.906 "nvme_io": false, 00:09:51.906 "nvme_io_md": false, 00:09:51.906 "write_zeroes": true, 00:09:51.906 "zcopy": true, 00:09:51.906 "get_zone_info": false, 00:09:51.906 "zone_management": false, 00:09:51.906 "zone_append": false, 00:09:51.906 "compare": false, 00:09:51.906 "compare_and_write": false, 00:09:51.906 "abort": true, 00:09:51.906 "seek_hole": false, 00:09:51.906 "seek_data": false, 00:09:51.906 "copy": true, 00:09:51.906 "nvme_iov_md": false 00:09:51.906 }, 00:09:51.906 "memory_domains": [ 00:09:51.906 { 00:09:51.906 "dma_device_id": "system", 00:09:51.906 "dma_device_type": 1 00:09:51.906 }, 00:09:51.906 { 00:09:51.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.906 "dma_device_type": 2 00:09:51.906 } 00:09:51.906 ], 00:09:51.906 "driver_specific": {} 00:09:51.906 } 00:09:51.906 ] 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.906 14:20:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.906 "name": "Existed_Raid", 00:09:51.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.906 "strip_size_kb": 64, 00:09:51.906 "state": "configuring", 00:09:51.906 "raid_level": "raid0", 00:09:51.906 "superblock": false, 00:09:51.906 "num_base_bdevs": 3, 00:09:51.906 "num_base_bdevs_discovered": 2, 00:09:51.906 "num_base_bdevs_operational": 3, 00:09:51.906 "base_bdevs_list": [ 00:09:51.906 { 00:09:51.906 "name": "BaseBdev1", 00:09:51.906 "uuid": "bfb664cc-d058-4f1a-9627-09f6cdd8dd2f", 00:09:51.906 "is_configured": true, 00:09:51.906 "data_offset": 0, 00:09:51.906 "data_size": 65536 00:09:51.906 }, 00:09:51.906 { 00:09:51.906 "name": "BaseBdev2", 00:09:51.906 "uuid": "8648112a-72ba-4e63-827c-48ef988bb0a4", 00:09:51.906 "is_configured": true, 00:09:51.906 "data_offset": 0, 00:09:51.906 "data_size": 65536 00:09:51.906 }, 00:09:51.906 { 00:09:51.906 "name": "BaseBdev3", 00:09:51.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.906 "is_configured": false, 00:09:51.906 "data_offset": 0, 00:09:51.906 "data_size": 0 00:09:51.906 } 00:09:51.906 ] 00:09:51.906 }' 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.906 14:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.474 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:52.474 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.474 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.474 [2024-11-20 14:20:31.273184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.474 [2024-11-20 14:20:31.273255] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:52.474 [2024-11-20 14:20:31.273277] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:52.474 [2024-11-20 14:20:31.273621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:52.474 [2024-11-20 14:20:31.273856] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:52.474 [2024-11-20 14:20:31.273874] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:52.474 [2024-11-20 14:20:31.274213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.474 BaseBdev3 00:09:52.474 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.474 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:52.474 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:52.474 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.474 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:52.474 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.474 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.474 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.474 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.474 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.474 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.474 
14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:52.474 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.474 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.474 [ 00:09:52.474 { 00:09:52.474 "name": "BaseBdev3", 00:09:52.474 "aliases": [ 00:09:52.474 "2e9f5bac-d67c-45eb-8f7c-e4597a78201d" 00:09:52.474 ], 00:09:52.474 "product_name": "Malloc disk", 00:09:52.474 "block_size": 512, 00:09:52.474 "num_blocks": 65536, 00:09:52.474 "uuid": "2e9f5bac-d67c-45eb-8f7c-e4597a78201d", 00:09:52.474 "assigned_rate_limits": { 00:09:52.474 "rw_ios_per_sec": 0, 00:09:52.474 "rw_mbytes_per_sec": 0, 00:09:52.474 "r_mbytes_per_sec": 0, 00:09:52.474 "w_mbytes_per_sec": 0 00:09:52.474 }, 00:09:52.474 "claimed": true, 00:09:52.474 "claim_type": "exclusive_write", 00:09:52.474 "zoned": false, 00:09:52.474 "supported_io_types": { 00:09:52.474 "read": true, 00:09:52.474 "write": true, 00:09:52.474 "unmap": true, 00:09:52.474 "flush": true, 00:09:52.474 "reset": true, 00:09:52.474 "nvme_admin": false, 00:09:52.474 "nvme_io": false, 00:09:52.474 "nvme_io_md": false, 00:09:52.475 "write_zeroes": true, 00:09:52.475 "zcopy": true, 00:09:52.475 "get_zone_info": false, 00:09:52.475 "zone_management": false, 00:09:52.475 "zone_append": false, 00:09:52.475 "compare": false, 00:09:52.475 "compare_and_write": false, 00:09:52.475 "abort": true, 00:09:52.475 "seek_hole": false, 00:09:52.475 "seek_data": false, 00:09:52.475 "copy": true, 00:09:52.475 "nvme_iov_md": false 00:09:52.475 }, 00:09:52.475 "memory_domains": [ 00:09:52.475 { 00:09:52.475 "dma_device_id": "system", 00:09:52.475 "dma_device_type": 1 00:09:52.475 }, 00:09:52.475 { 00:09:52.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.475 "dma_device_type": 2 00:09:52.475 } 00:09:52.475 ], 00:09:52.475 "driver_specific": {} 00:09:52.475 } 00:09:52.475 ] 
00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.475 "name": "Existed_Raid", 00:09:52.475 "uuid": "a2764b19-88f4-4a69-83f3-d5d82dd9c9e9", 00:09:52.475 "strip_size_kb": 64, 00:09:52.475 "state": "online", 00:09:52.475 "raid_level": "raid0", 00:09:52.475 "superblock": false, 00:09:52.475 "num_base_bdevs": 3, 00:09:52.475 "num_base_bdevs_discovered": 3, 00:09:52.475 "num_base_bdevs_operational": 3, 00:09:52.475 "base_bdevs_list": [ 00:09:52.475 { 00:09:52.475 "name": "BaseBdev1", 00:09:52.475 "uuid": "bfb664cc-d058-4f1a-9627-09f6cdd8dd2f", 00:09:52.475 "is_configured": true, 00:09:52.475 "data_offset": 0, 00:09:52.475 "data_size": 65536 00:09:52.475 }, 00:09:52.475 { 00:09:52.475 "name": "BaseBdev2", 00:09:52.475 "uuid": "8648112a-72ba-4e63-827c-48ef988bb0a4", 00:09:52.475 "is_configured": true, 00:09:52.475 "data_offset": 0, 00:09:52.475 "data_size": 65536 00:09:52.475 }, 00:09:52.475 { 00:09:52.475 "name": "BaseBdev3", 00:09:52.475 "uuid": "2e9f5bac-d67c-45eb-8f7c-e4597a78201d", 00:09:52.475 "is_configured": true, 00:09:52.475 "data_offset": 0, 00:09:52.475 "data_size": 65536 00:09:52.475 } 00:09:52.475 ] 00:09:52.475 }' 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.475 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.041 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:53.041 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:53.041 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:53.041 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:53.041 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:53.041 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:53.041 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:53.041 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:53.041 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.041 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.041 [2024-11-20 14:20:31.821706] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.041 14:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.041 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:53.041 "name": "Existed_Raid", 00:09:53.041 "aliases": [ 00:09:53.041 "a2764b19-88f4-4a69-83f3-d5d82dd9c9e9" 00:09:53.041 ], 00:09:53.041 "product_name": "Raid Volume", 00:09:53.041 "block_size": 512, 00:09:53.041 "num_blocks": 196608, 00:09:53.041 "uuid": "a2764b19-88f4-4a69-83f3-d5d82dd9c9e9", 00:09:53.041 "assigned_rate_limits": { 00:09:53.042 "rw_ios_per_sec": 0, 00:09:53.042 "rw_mbytes_per_sec": 0, 00:09:53.042 "r_mbytes_per_sec": 0, 00:09:53.042 "w_mbytes_per_sec": 0 00:09:53.042 }, 00:09:53.042 "claimed": false, 00:09:53.042 "zoned": false, 00:09:53.042 "supported_io_types": { 00:09:53.042 "read": true, 00:09:53.042 "write": true, 00:09:53.042 "unmap": true, 00:09:53.042 "flush": true, 00:09:53.042 "reset": true, 00:09:53.042 "nvme_admin": false, 00:09:53.042 "nvme_io": false, 00:09:53.042 "nvme_io_md": false, 00:09:53.042 "write_zeroes": true, 00:09:53.042 "zcopy": false, 00:09:53.042 "get_zone_info": false, 00:09:53.042 "zone_management": false, 00:09:53.042 
"zone_append": false, 00:09:53.042 "compare": false, 00:09:53.042 "compare_and_write": false, 00:09:53.042 "abort": false, 00:09:53.042 "seek_hole": false, 00:09:53.042 "seek_data": false, 00:09:53.042 "copy": false, 00:09:53.042 "nvme_iov_md": false 00:09:53.042 }, 00:09:53.042 "memory_domains": [ 00:09:53.042 { 00:09:53.042 "dma_device_id": "system", 00:09:53.042 "dma_device_type": 1 00:09:53.042 }, 00:09:53.042 { 00:09:53.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.042 "dma_device_type": 2 00:09:53.042 }, 00:09:53.042 { 00:09:53.042 "dma_device_id": "system", 00:09:53.042 "dma_device_type": 1 00:09:53.042 }, 00:09:53.042 { 00:09:53.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.042 "dma_device_type": 2 00:09:53.042 }, 00:09:53.042 { 00:09:53.042 "dma_device_id": "system", 00:09:53.042 "dma_device_type": 1 00:09:53.042 }, 00:09:53.042 { 00:09:53.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.042 "dma_device_type": 2 00:09:53.042 } 00:09:53.042 ], 00:09:53.042 "driver_specific": { 00:09:53.042 "raid": { 00:09:53.042 "uuid": "a2764b19-88f4-4a69-83f3-d5d82dd9c9e9", 00:09:53.042 "strip_size_kb": 64, 00:09:53.042 "state": "online", 00:09:53.042 "raid_level": "raid0", 00:09:53.042 "superblock": false, 00:09:53.042 "num_base_bdevs": 3, 00:09:53.042 "num_base_bdevs_discovered": 3, 00:09:53.042 "num_base_bdevs_operational": 3, 00:09:53.042 "base_bdevs_list": [ 00:09:53.042 { 00:09:53.042 "name": "BaseBdev1", 00:09:53.042 "uuid": "bfb664cc-d058-4f1a-9627-09f6cdd8dd2f", 00:09:53.042 "is_configured": true, 00:09:53.042 "data_offset": 0, 00:09:53.042 "data_size": 65536 00:09:53.042 }, 00:09:53.042 { 00:09:53.042 "name": "BaseBdev2", 00:09:53.042 "uuid": "8648112a-72ba-4e63-827c-48ef988bb0a4", 00:09:53.042 "is_configured": true, 00:09:53.042 "data_offset": 0, 00:09:53.042 "data_size": 65536 00:09:53.042 }, 00:09:53.042 { 00:09:53.042 "name": "BaseBdev3", 00:09:53.042 "uuid": "2e9f5bac-d67c-45eb-8f7c-e4597a78201d", 00:09:53.042 "is_configured": true, 
00:09:53.042 "data_offset": 0, 00:09:53.042 "data_size": 65536 00:09:53.042 } 00:09:53.042 ] 00:09:53.042 } 00:09:53.042 } 00:09:53.042 }' 00:09:53.042 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:53.042 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:53.042 BaseBdev2 00:09:53.042 BaseBdev3' 00:09:53.042 14:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.300 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.300 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.300 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.300 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:53.300 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.300 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.300 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.300 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.300 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.300 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.300 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:53.300 14:20:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.301 [2024-11-20 14:20:32.193466] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:53.301 [2024-11-20 14:20:32.193500] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.301 [2024-11-20 14:20:32.193570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:53.301 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.560 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.560 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.560 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.560 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.560 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.560 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.560 "name": "Existed_Raid", 00:09:53.560 "uuid": "a2764b19-88f4-4a69-83f3-d5d82dd9c9e9", 00:09:53.560 "strip_size_kb": 64, 00:09:53.560 "state": "offline", 00:09:53.560 "raid_level": "raid0", 00:09:53.560 "superblock": false, 00:09:53.560 "num_base_bdevs": 3, 00:09:53.560 "num_base_bdevs_discovered": 2, 00:09:53.560 "num_base_bdevs_operational": 2, 00:09:53.560 "base_bdevs_list": [ 00:09:53.560 { 00:09:53.560 "name": null, 00:09:53.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.560 "is_configured": false, 00:09:53.560 "data_offset": 0, 00:09:53.560 "data_size": 65536 00:09:53.560 }, 00:09:53.560 { 00:09:53.560 "name": "BaseBdev2", 00:09:53.560 "uuid": "8648112a-72ba-4e63-827c-48ef988bb0a4", 00:09:53.560 "is_configured": true, 00:09:53.560 "data_offset": 0, 00:09:53.560 "data_size": 65536 00:09:53.560 }, 00:09:53.560 { 00:09:53.560 "name": "BaseBdev3", 00:09:53.560 "uuid": "2e9f5bac-d67c-45eb-8f7c-e4597a78201d", 00:09:53.560 "is_configured": true, 00:09:53.560 "data_offset": 0, 00:09:53.560 "data_size": 65536 00:09:53.560 } 00:09:53.560 ] 00:09:53.560 }' 00:09:53.560 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.560 14:20:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.818 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:53.818 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.077 [2024-11-20 14:20:32.857831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.077 14:20:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.077 14:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.077 [2024-11-20 14:20:33.003265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:54.077 [2024-11-20 14:20:33.003331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.336 BaseBdev2 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.336 [ 00:09:54.336 { 00:09:54.336 "name": "BaseBdev2", 00:09:54.336 "aliases": [ 00:09:54.336 "f383b582-3c16-4ae0-bceb-4c28dccd3c6e" 00:09:54.336 ], 00:09:54.336 "product_name": "Malloc disk", 00:09:54.336 "block_size": 512, 00:09:54.336 "num_blocks": 65536, 00:09:54.336 "uuid": "f383b582-3c16-4ae0-bceb-4c28dccd3c6e", 00:09:54.336 "assigned_rate_limits": { 00:09:54.336 "rw_ios_per_sec": 0, 00:09:54.336 "rw_mbytes_per_sec": 0, 00:09:54.336 "r_mbytes_per_sec": 0, 00:09:54.336 "w_mbytes_per_sec": 0 00:09:54.336 }, 00:09:54.336 "claimed": false, 00:09:54.336 "zoned": false, 00:09:54.336 "supported_io_types": { 00:09:54.336 "read": true, 00:09:54.336 "write": true, 00:09:54.336 "unmap": true, 00:09:54.336 "flush": true, 00:09:54.336 "reset": true, 00:09:54.336 "nvme_admin": false, 00:09:54.336 "nvme_io": false, 00:09:54.336 "nvme_io_md": false, 00:09:54.336 "write_zeroes": true, 00:09:54.336 "zcopy": true, 00:09:54.336 "get_zone_info": false, 00:09:54.336 "zone_management": false, 00:09:54.336 "zone_append": false, 00:09:54.336 "compare": false, 00:09:54.336 "compare_and_write": false, 00:09:54.336 "abort": true, 00:09:54.336 "seek_hole": false, 00:09:54.336 "seek_data": false, 00:09:54.336 "copy": true, 00:09:54.336 "nvme_iov_md": false 00:09:54.336 }, 00:09:54.336 "memory_domains": [ 00:09:54.336 { 00:09:54.336 "dma_device_id": "system", 00:09:54.336 "dma_device_type": 1 00:09:54.336 }, 
00:09:54.336 { 00:09:54.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.336 "dma_device_type": 2 00:09:54.336 } 00:09:54.336 ], 00:09:54.336 "driver_specific": {} 00:09:54.336 } 00:09:54.336 ] 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.336 BaseBdev3 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.336 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.336 [ 00:09:54.336 { 00:09:54.337 "name": "BaseBdev3", 00:09:54.337 "aliases": [ 00:09:54.337 "425dd753-7234-4640-b8a1-1e7ed3ad1c7f" 00:09:54.337 ], 00:09:54.337 "product_name": "Malloc disk", 00:09:54.337 "block_size": 512, 00:09:54.337 "num_blocks": 65536, 00:09:54.337 "uuid": "425dd753-7234-4640-b8a1-1e7ed3ad1c7f", 00:09:54.337 "assigned_rate_limits": { 00:09:54.337 "rw_ios_per_sec": 0, 00:09:54.337 "rw_mbytes_per_sec": 0, 00:09:54.337 "r_mbytes_per_sec": 0, 00:09:54.337 "w_mbytes_per_sec": 0 00:09:54.337 }, 00:09:54.337 "claimed": false, 00:09:54.337 "zoned": false, 00:09:54.337 "supported_io_types": { 00:09:54.337 "read": true, 00:09:54.337 "write": true, 00:09:54.337 "unmap": true, 00:09:54.337 "flush": true, 00:09:54.337 "reset": true, 00:09:54.337 "nvme_admin": false, 00:09:54.337 "nvme_io": false, 00:09:54.337 "nvme_io_md": false, 00:09:54.337 "write_zeroes": true, 00:09:54.337 "zcopy": true, 00:09:54.337 "get_zone_info": false, 00:09:54.337 "zone_management": false, 00:09:54.337 "zone_append": false, 00:09:54.337 "compare": false, 00:09:54.337 "compare_and_write": false, 00:09:54.337 "abort": true, 00:09:54.337 "seek_hole": false, 00:09:54.337 "seek_data": false, 00:09:54.337 "copy": true, 00:09:54.337 "nvme_iov_md": false 00:09:54.337 }, 00:09:54.337 "memory_domains": [ 00:09:54.337 { 00:09:54.337 "dma_device_id": "system", 00:09:54.337 "dma_device_type": 1 00:09:54.337 }, 00:09:54.337 { 
00:09:54.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.337 "dma_device_type": 2 00:09:54.337 } 00:09:54.337 ], 00:09:54.337 "driver_specific": {} 00:09:54.337 } 00:09:54.337 ] 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.337 [2024-11-20 14:20:33.295567] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.337 [2024-11-20 14:20:33.295781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.337 [2024-11-20 14:20:33.295828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.337 [2024-11-20 14:20:33.298213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.337 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.596 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.596 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.596 "name": "Existed_Raid", 00:09:54.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.596 "strip_size_kb": 64, 00:09:54.596 "state": "configuring", 00:09:54.596 "raid_level": "raid0", 00:09:54.596 "superblock": false, 00:09:54.596 "num_base_bdevs": 3, 00:09:54.596 "num_base_bdevs_discovered": 2, 00:09:54.596 "num_base_bdevs_operational": 3, 00:09:54.596 "base_bdevs_list": [ 00:09:54.596 { 00:09:54.596 "name": "BaseBdev1", 00:09:54.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.596 
"is_configured": false, 00:09:54.596 "data_offset": 0, 00:09:54.596 "data_size": 0 00:09:54.596 }, 00:09:54.596 { 00:09:54.596 "name": "BaseBdev2", 00:09:54.596 "uuid": "f383b582-3c16-4ae0-bceb-4c28dccd3c6e", 00:09:54.596 "is_configured": true, 00:09:54.596 "data_offset": 0, 00:09:54.596 "data_size": 65536 00:09:54.596 }, 00:09:54.596 { 00:09:54.596 "name": "BaseBdev3", 00:09:54.596 "uuid": "425dd753-7234-4640-b8a1-1e7ed3ad1c7f", 00:09:54.596 "is_configured": true, 00:09:54.596 "data_offset": 0, 00:09:54.596 "data_size": 65536 00:09:54.596 } 00:09:54.596 ] 00:09:54.596 }' 00:09:54.596 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.596 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.855 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:54.855 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.855 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.855 [2024-11-20 14:20:33.823779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:54.855 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.855 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:54.855 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.855 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.855 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.855 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.855 14:20:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.855 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.855 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.855 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.855 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.855 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.855 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.855 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.855 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.114 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.114 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.114 "name": "Existed_Raid", 00:09:55.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.114 "strip_size_kb": 64, 00:09:55.114 "state": "configuring", 00:09:55.114 "raid_level": "raid0", 00:09:55.114 "superblock": false, 00:09:55.114 "num_base_bdevs": 3, 00:09:55.114 "num_base_bdevs_discovered": 1, 00:09:55.114 "num_base_bdevs_operational": 3, 00:09:55.114 "base_bdevs_list": [ 00:09:55.114 { 00:09:55.114 "name": "BaseBdev1", 00:09:55.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.114 "is_configured": false, 00:09:55.114 "data_offset": 0, 00:09:55.114 "data_size": 0 00:09:55.114 }, 00:09:55.114 { 00:09:55.114 "name": null, 00:09:55.114 "uuid": "f383b582-3c16-4ae0-bceb-4c28dccd3c6e", 00:09:55.114 "is_configured": false, 00:09:55.114 "data_offset": 0, 
00:09:55.114 "data_size": 65536 00:09:55.114 }, 00:09:55.114 { 00:09:55.114 "name": "BaseBdev3", 00:09:55.114 "uuid": "425dd753-7234-4640-b8a1-1e7ed3ad1c7f", 00:09:55.114 "is_configured": true, 00:09:55.114 "data_offset": 0, 00:09:55.114 "data_size": 65536 00:09:55.114 } 00:09:55.114 ] 00:09:55.114 }' 00:09:55.114 14:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.114 14:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.373 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.373 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:55.373 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.373 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.632 [2024-11-20 14:20:34.441858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.632 BaseBdev1 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.632 [ 00:09:55.632 { 00:09:55.632 "name": "BaseBdev1", 00:09:55.632 "aliases": [ 00:09:55.632 "c1198228-84de-4aff-b7a9-a51bd6cd8330" 00:09:55.632 ], 00:09:55.632 "product_name": "Malloc disk", 00:09:55.632 "block_size": 512, 00:09:55.632 "num_blocks": 65536, 00:09:55.632 "uuid": "c1198228-84de-4aff-b7a9-a51bd6cd8330", 00:09:55.632 "assigned_rate_limits": { 00:09:55.632 "rw_ios_per_sec": 0, 00:09:55.632 "rw_mbytes_per_sec": 0, 00:09:55.632 "r_mbytes_per_sec": 0, 00:09:55.632 "w_mbytes_per_sec": 0 00:09:55.632 }, 00:09:55.632 "claimed": true, 00:09:55.632 "claim_type": "exclusive_write", 00:09:55.632 "zoned": false, 00:09:55.632 "supported_io_types": { 00:09:55.632 "read": true, 00:09:55.632 "write": true, 00:09:55.632 "unmap": 
true, 00:09:55.632 "flush": true, 00:09:55.632 "reset": true, 00:09:55.632 "nvme_admin": false, 00:09:55.632 "nvme_io": false, 00:09:55.632 "nvme_io_md": false, 00:09:55.632 "write_zeroes": true, 00:09:55.632 "zcopy": true, 00:09:55.632 "get_zone_info": false, 00:09:55.632 "zone_management": false, 00:09:55.632 "zone_append": false, 00:09:55.632 "compare": false, 00:09:55.632 "compare_and_write": false, 00:09:55.632 "abort": true, 00:09:55.632 "seek_hole": false, 00:09:55.632 "seek_data": false, 00:09:55.632 "copy": true, 00:09:55.632 "nvme_iov_md": false 00:09:55.632 }, 00:09:55.632 "memory_domains": [ 00:09:55.632 { 00:09:55.632 "dma_device_id": "system", 00:09:55.632 "dma_device_type": 1 00:09:55.632 }, 00:09:55.632 { 00:09:55.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.632 "dma_device_type": 2 00:09:55.632 } 00:09:55.632 ], 00:09:55.632 "driver_specific": {} 00:09:55.632 } 00:09:55.632 ] 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.632 14:20:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.632 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.632 "name": "Existed_Raid", 00:09:55.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.632 "strip_size_kb": 64, 00:09:55.632 "state": "configuring", 00:09:55.632 "raid_level": "raid0", 00:09:55.632 "superblock": false, 00:09:55.633 "num_base_bdevs": 3, 00:09:55.633 "num_base_bdevs_discovered": 2, 00:09:55.633 "num_base_bdevs_operational": 3, 00:09:55.633 "base_bdevs_list": [ 00:09:55.633 { 00:09:55.633 "name": "BaseBdev1", 00:09:55.633 "uuid": "c1198228-84de-4aff-b7a9-a51bd6cd8330", 00:09:55.633 "is_configured": true, 00:09:55.633 "data_offset": 0, 00:09:55.633 "data_size": 65536 00:09:55.633 }, 00:09:55.633 { 00:09:55.633 "name": null, 00:09:55.633 "uuid": "f383b582-3c16-4ae0-bceb-4c28dccd3c6e", 00:09:55.633 "is_configured": false, 00:09:55.633 "data_offset": 0, 00:09:55.633 "data_size": 65536 00:09:55.633 }, 00:09:55.633 { 00:09:55.633 "name": "BaseBdev3", 00:09:55.633 "uuid": "425dd753-7234-4640-b8a1-1e7ed3ad1c7f", 00:09:55.633 "is_configured": true, 00:09:55.633 "data_offset": 0, 
00:09:55.633 "data_size": 65536 00:09:55.633 } 00:09:55.633 ] 00:09:55.633 }' 00:09:55.633 14:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.633 14:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.237 [2024-11-20 14:20:35.062125] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.237 "name": "Existed_Raid", 00:09:56.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.237 "strip_size_kb": 64, 00:09:56.237 "state": "configuring", 00:09:56.237 "raid_level": "raid0", 00:09:56.237 "superblock": false, 00:09:56.237 "num_base_bdevs": 3, 00:09:56.237 "num_base_bdevs_discovered": 1, 00:09:56.237 "num_base_bdevs_operational": 3, 00:09:56.237 "base_bdevs_list": [ 00:09:56.237 { 00:09:56.237 "name": "BaseBdev1", 00:09:56.237 "uuid": "c1198228-84de-4aff-b7a9-a51bd6cd8330", 00:09:56.237 "is_configured": true, 00:09:56.237 "data_offset": 0, 00:09:56.237 "data_size": 65536 00:09:56.237 }, 00:09:56.237 { 
00:09:56.237 "name": null, 00:09:56.237 "uuid": "f383b582-3c16-4ae0-bceb-4c28dccd3c6e", 00:09:56.237 "is_configured": false, 00:09:56.237 "data_offset": 0, 00:09:56.237 "data_size": 65536 00:09:56.237 }, 00:09:56.237 { 00:09:56.237 "name": null, 00:09:56.237 "uuid": "425dd753-7234-4640-b8a1-1e7ed3ad1c7f", 00:09:56.237 "is_configured": false, 00:09:56.237 "data_offset": 0, 00:09:56.237 "data_size": 65536 00:09:56.237 } 00:09:56.237 ] 00:09:56.237 }' 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.237 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.804 [2024-11-20 14:20:35.630300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.804 "name": "Existed_Raid", 00:09:56.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.804 "strip_size_kb": 64, 00:09:56.804 "state": "configuring", 00:09:56.804 "raid_level": "raid0", 00:09:56.804 
"superblock": false, 00:09:56.804 "num_base_bdevs": 3, 00:09:56.804 "num_base_bdevs_discovered": 2, 00:09:56.804 "num_base_bdevs_operational": 3, 00:09:56.804 "base_bdevs_list": [ 00:09:56.804 { 00:09:56.804 "name": "BaseBdev1", 00:09:56.804 "uuid": "c1198228-84de-4aff-b7a9-a51bd6cd8330", 00:09:56.804 "is_configured": true, 00:09:56.804 "data_offset": 0, 00:09:56.804 "data_size": 65536 00:09:56.804 }, 00:09:56.804 { 00:09:56.804 "name": null, 00:09:56.804 "uuid": "f383b582-3c16-4ae0-bceb-4c28dccd3c6e", 00:09:56.804 "is_configured": false, 00:09:56.804 "data_offset": 0, 00:09:56.804 "data_size": 65536 00:09:56.804 }, 00:09:56.804 { 00:09:56.804 "name": "BaseBdev3", 00:09:56.804 "uuid": "425dd753-7234-4640-b8a1-1e7ed3ad1c7f", 00:09:56.804 "is_configured": true, 00:09:56.804 "data_offset": 0, 00:09:56.804 "data_size": 65536 00:09:56.804 } 00:09:56.804 ] 00:09:56.804 }' 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.804 14:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.370 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.370 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.370 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:57.370 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.370 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.370 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:57.370 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:57.370 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:57.370 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.370 [2024-11-20 14:20:36.206811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:57.370 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.370 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:57.370 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.370 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.370 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.370 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.371 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.371 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.371 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.371 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.371 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.371 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.371 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.371 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.371 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.371 14:20:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.371 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.371 "name": "Existed_Raid", 00:09:57.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.371 "strip_size_kb": 64, 00:09:57.371 "state": "configuring", 00:09:57.371 "raid_level": "raid0", 00:09:57.371 "superblock": false, 00:09:57.371 "num_base_bdevs": 3, 00:09:57.371 "num_base_bdevs_discovered": 1, 00:09:57.371 "num_base_bdevs_operational": 3, 00:09:57.371 "base_bdevs_list": [ 00:09:57.371 { 00:09:57.371 "name": null, 00:09:57.371 "uuid": "c1198228-84de-4aff-b7a9-a51bd6cd8330", 00:09:57.371 "is_configured": false, 00:09:57.371 "data_offset": 0, 00:09:57.371 "data_size": 65536 00:09:57.371 }, 00:09:57.371 { 00:09:57.371 "name": null, 00:09:57.371 "uuid": "f383b582-3c16-4ae0-bceb-4c28dccd3c6e", 00:09:57.371 "is_configured": false, 00:09:57.371 "data_offset": 0, 00:09:57.371 "data_size": 65536 00:09:57.371 }, 00:09:57.371 { 00:09:57.371 "name": "BaseBdev3", 00:09:57.371 "uuid": "425dd753-7234-4640-b8a1-1e7ed3ad1c7f", 00:09:57.371 "is_configured": true, 00:09:57.371 "data_offset": 0, 00:09:57.371 "data_size": 65536 00:09:57.371 } 00:09:57.371 ] 00:09:57.371 }' 00:09:57.371 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.371 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.936 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:57.936 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.936 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.936 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.936 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:57.936 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:57.936 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:57.936 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.936 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.936 [2024-11-20 14:20:36.864689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.936 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.936 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:57.937 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.937 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.937 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.937 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.937 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.937 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.937 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.937 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.937 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.937 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:57.937 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.937 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.937 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.937 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.195 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.195 "name": "Existed_Raid", 00:09:58.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.195 "strip_size_kb": 64, 00:09:58.195 "state": "configuring", 00:09:58.195 "raid_level": "raid0", 00:09:58.195 "superblock": false, 00:09:58.195 "num_base_bdevs": 3, 00:09:58.195 "num_base_bdevs_discovered": 2, 00:09:58.195 "num_base_bdevs_operational": 3, 00:09:58.195 "base_bdevs_list": [ 00:09:58.195 { 00:09:58.195 "name": null, 00:09:58.195 "uuid": "c1198228-84de-4aff-b7a9-a51bd6cd8330", 00:09:58.195 "is_configured": false, 00:09:58.195 "data_offset": 0, 00:09:58.195 "data_size": 65536 00:09:58.195 }, 00:09:58.195 { 00:09:58.195 "name": "BaseBdev2", 00:09:58.195 "uuid": "f383b582-3c16-4ae0-bceb-4c28dccd3c6e", 00:09:58.195 "is_configured": true, 00:09:58.195 "data_offset": 0, 00:09:58.195 "data_size": 65536 00:09:58.195 }, 00:09:58.195 { 00:09:58.195 "name": "BaseBdev3", 00:09:58.195 "uuid": "425dd753-7234-4640-b8a1-1e7ed3ad1c7f", 00:09:58.195 "is_configured": true, 00:09:58.195 "data_offset": 0, 00:09:58.195 "data_size": 65536 00:09:58.195 } 00:09:58.195 ] 00:09:58.195 }' 00:09:58.195 14:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.195 14:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.453 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.453 14:20:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:58.453 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.453 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.453 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c1198228-84de-4aff-b7a9-a51bd6cd8330 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.711 [2024-11-20 14:20:37.526638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:58.711 [2024-11-20 14:20:37.526687] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:58.711 [2024-11-20 14:20:37.526703] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:58.711 [2024-11-20 14:20:37.527090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:58.711 [2024-11-20 14:20:37.527299] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:58.711 [2024-11-20 14:20:37.527316] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:58.711 [2024-11-20 14:20:37.527639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.711 NewBaseBdev 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:58.711 [ 00:09:58.711 { 00:09:58.711 "name": "NewBaseBdev", 00:09:58.711 "aliases": [ 00:09:58.711 "c1198228-84de-4aff-b7a9-a51bd6cd8330" 00:09:58.711 ], 00:09:58.711 "product_name": "Malloc disk", 00:09:58.711 "block_size": 512, 00:09:58.711 "num_blocks": 65536, 00:09:58.711 "uuid": "c1198228-84de-4aff-b7a9-a51bd6cd8330", 00:09:58.711 "assigned_rate_limits": { 00:09:58.711 "rw_ios_per_sec": 0, 00:09:58.711 "rw_mbytes_per_sec": 0, 00:09:58.711 "r_mbytes_per_sec": 0, 00:09:58.711 "w_mbytes_per_sec": 0 00:09:58.711 }, 00:09:58.711 "claimed": true, 00:09:58.711 "claim_type": "exclusive_write", 00:09:58.711 "zoned": false, 00:09:58.711 "supported_io_types": { 00:09:58.711 "read": true, 00:09:58.711 "write": true, 00:09:58.711 "unmap": true, 00:09:58.711 "flush": true, 00:09:58.711 "reset": true, 00:09:58.711 "nvme_admin": false, 00:09:58.711 "nvme_io": false, 00:09:58.711 "nvme_io_md": false, 00:09:58.711 "write_zeroes": true, 00:09:58.711 "zcopy": true, 00:09:58.711 "get_zone_info": false, 00:09:58.711 "zone_management": false, 00:09:58.711 "zone_append": false, 00:09:58.711 "compare": false, 00:09:58.711 "compare_and_write": false, 00:09:58.711 "abort": true, 00:09:58.711 "seek_hole": false, 00:09:58.711 "seek_data": false, 00:09:58.711 "copy": true, 00:09:58.711 "nvme_iov_md": false 00:09:58.711 }, 00:09:58.711 "memory_domains": [ 00:09:58.711 { 00:09:58.711 "dma_device_id": "system", 00:09:58.711 "dma_device_type": 1 00:09:58.711 }, 00:09:58.711 { 00:09:58.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.711 "dma_device_type": 2 00:09:58.711 } 00:09:58.711 ], 00:09:58.711 "driver_specific": {} 00:09:58.711 } 00:09:58.711 ] 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.711 "name": "Existed_Raid", 00:09:58.711 "uuid": "0d9e026f-9db5-46f9-8feb-07e3e6eaeb79", 00:09:58.711 "strip_size_kb": 64, 00:09:58.711 "state": "online", 00:09:58.711 "raid_level": "raid0", 00:09:58.711 "superblock": false, 00:09:58.711 "num_base_bdevs": 3, 00:09:58.711 
"num_base_bdevs_discovered": 3, 00:09:58.711 "num_base_bdevs_operational": 3, 00:09:58.711 "base_bdevs_list": [ 00:09:58.711 { 00:09:58.711 "name": "NewBaseBdev", 00:09:58.711 "uuid": "c1198228-84de-4aff-b7a9-a51bd6cd8330", 00:09:58.711 "is_configured": true, 00:09:58.711 "data_offset": 0, 00:09:58.711 "data_size": 65536 00:09:58.711 }, 00:09:58.711 { 00:09:58.711 "name": "BaseBdev2", 00:09:58.711 "uuid": "f383b582-3c16-4ae0-bceb-4c28dccd3c6e", 00:09:58.711 "is_configured": true, 00:09:58.711 "data_offset": 0, 00:09:58.711 "data_size": 65536 00:09:58.711 }, 00:09:58.711 { 00:09:58.711 "name": "BaseBdev3", 00:09:58.711 "uuid": "425dd753-7234-4640-b8a1-1e7ed3ad1c7f", 00:09:58.711 "is_configured": true, 00:09:58.711 "data_offset": 0, 00:09:58.711 "data_size": 65536 00:09:58.711 } 00:09:58.711 ] 00:09:58.711 }' 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.711 14:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.278 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.279 [2024-11-20 14:20:38.095252] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:59.279 "name": "Existed_Raid", 00:09:59.279 "aliases": [ 00:09:59.279 "0d9e026f-9db5-46f9-8feb-07e3e6eaeb79" 00:09:59.279 ], 00:09:59.279 "product_name": "Raid Volume", 00:09:59.279 "block_size": 512, 00:09:59.279 "num_blocks": 196608, 00:09:59.279 "uuid": "0d9e026f-9db5-46f9-8feb-07e3e6eaeb79", 00:09:59.279 "assigned_rate_limits": { 00:09:59.279 "rw_ios_per_sec": 0, 00:09:59.279 "rw_mbytes_per_sec": 0, 00:09:59.279 "r_mbytes_per_sec": 0, 00:09:59.279 "w_mbytes_per_sec": 0 00:09:59.279 }, 00:09:59.279 "claimed": false, 00:09:59.279 "zoned": false, 00:09:59.279 "supported_io_types": { 00:09:59.279 "read": true, 00:09:59.279 "write": true, 00:09:59.279 "unmap": true, 00:09:59.279 "flush": true, 00:09:59.279 "reset": true, 00:09:59.279 "nvme_admin": false, 00:09:59.279 "nvme_io": false, 00:09:59.279 "nvme_io_md": false, 00:09:59.279 "write_zeroes": true, 00:09:59.279 "zcopy": false, 00:09:59.279 "get_zone_info": false, 00:09:59.279 "zone_management": false, 00:09:59.279 "zone_append": false, 00:09:59.279 "compare": false, 00:09:59.279 "compare_and_write": false, 00:09:59.279 "abort": false, 00:09:59.279 "seek_hole": false, 00:09:59.279 "seek_data": false, 00:09:59.279 "copy": false, 00:09:59.279 "nvme_iov_md": false 00:09:59.279 }, 00:09:59.279 "memory_domains": [ 00:09:59.279 { 00:09:59.279 "dma_device_id": "system", 00:09:59.279 "dma_device_type": 1 00:09:59.279 }, 00:09:59.279 { 00:09:59.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.279 "dma_device_type": 2 00:09:59.279 }, 
00:09:59.279 { 00:09:59.279 "dma_device_id": "system", 00:09:59.279 "dma_device_type": 1 00:09:59.279 }, 00:09:59.279 { 00:09:59.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.279 "dma_device_type": 2 00:09:59.279 }, 00:09:59.279 { 00:09:59.279 "dma_device_id": "system", 00:09:59.279 "dma_device_type": 1 00:09:59.279 }, 00:09:59.279 { 00:09:59.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.279 "dma_device_type": 2 00:09:59.279 } 00:09:59.279 ], 00:09:59.279 "driver_specific": { 00:09:59.279 "raid": { 00:09:59.279 "uuid": "0d9e026f-9db5-46f9-8feb-07e3e6eaeb79", 00:09:59.279 "strip_size_kb": 64, 00:09:59.279 "state": "online", 00:09:59.279 "raid_level": "raid0", 00:09:59.279 "superblock": false, 00:09:59.279 "num_base_bdevs": 3, 00:09:59.279 "num_base_bdevs_discovered": 3, 00:09:59.279 "num_base_bdevs_operational": 3, 00:09:59.279 "base_bdevs_list": [ 00:09:59.279 { 00:09:59.279 "name": "NewBaseBdev", 00:09:59.279 "uuid": "c1198228-84de-4aff-b7a9-a51bd6cd8330", 00:09:59.279 "is_configured": true, 00:09:59.279 "data_offset": 0, 00:09:59.279 "data_size": 65536 00:09:59.279 }, 00:09:59.279 { 00:09:59.279 "name": "BaseBdev2", 00:09:59.279 "uuid": "f383b582-3c16-4ae0-bceb-4c28dccd3c6e", 00:09:59.279 "is_configured": true, 00:09:59.279 "data_offset": 0, 00:09:59.279 "data_size": 65536 00:09:59.279 }, 00:09:59.279 { 00:09:59.279 "name": "BaseBdev3", 00:09:59.279 "uuid": "425dd753-7234-4640-b8a1-1e7ed3ad1c7f", 00:09:59.279 "is_configured": true, 00:09:59.279 "data_offset": 0, 00:09:59.279 "data_size": 65536 00:09:59.279 } 00:09:59.279 ] 00:09:59.279 } 00:09:59.279 } 00:09:59.279 }' 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:59.279 BaseBdev2 00:09:59.279 BaseBdev3' 00:09:59.279 14:20:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.279 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.538 [2024-11-20 14:20:38.402971] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.538 [2024-11-20 14:20:38.403027] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.538 [2024-11-20 14:20:38.403127] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.538 [2024-11-20 14:20:38.403207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.538 [2024-11-20 14:20:38.403228] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63812 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63812 ']' 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63812 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63812 00:09:59.538 killing process with pid 63812 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.538 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.539 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63812' 00:09:59.539 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63812 00:09:59.539 [2024-11-20 14:20:38.440276] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.539 14:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63812 00:09:59.797 [2024-11-20 14:20:38.710686] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.224 ************************************ 00:10:01.224 END TEST raid_state_function_test 00:10:01.224 ************************************ 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:01.224 00:10:01.224 real 0m11.930s 
00:10:01.224 user 0m19.883s 00:10:01.224 sys 0m1.570s 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.224 14:20:39 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:01.224 14:20:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:01.224 14:20:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.224 14:20:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.224 ************************************ 00:10:01.224 START TEST raid_state_function_test_sb 00:10:01.224 ************************************ 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:01.224 Process raid pid: 64450 00:10:01.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64450 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64450' 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64450 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64450 ']' 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.224 14:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.224 [2024-11-20 14:20:39.941560] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:10:01.224 [2024-11-20 14:20:39.942022] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.224 [2024-11-20 14:20:40.124540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.483 [2024-11-20 14:20:40.253440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.483 [2024-11-20 14:20:40.462895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.483 [2024-11-20 14:20:40.462958] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.049 14:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.049 14:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:02.049 14:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.049 14:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.049 14:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.049 [2024-11-20 14:20:40.895684] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.049 [2024-11-20 14:20:40.895762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.050 [2024-11-20 14:20:40.895795] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.050 [2024-11-20 14:20:40.895811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.050 [2024-11-20 14:20:40.895821] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:10:02.050 [2024-11-20 14:20:40.895835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.050 "name": "Existed_Raid", 00:10:02.050 "uuid": "4ea7e364-f6b9-461f-8b27-d57a53548621", 00:10:02.050 "strip_size_kb": 64, 00:10:02.050 "state": "configuring", 00:10:02.050 "raid_level": "raid0", 00:10:02.050 "superblock": true, 00:10:02.050 "num_base_bdevs": 3, 00:10:02.050 "num_base_bdevs_discovered": 0, 00:10:02.050 "num_base_bdevs_operational": 3, 00:10:02.050 "base_bdevs_list": [ 00:10:02.050 { 00:10:02.050 "name": "BaseBdev1", 00:10:02.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.050 "is_configured": false, 00:10:02.050 "data_offset": 0, 00:10:02.050 "data_size": 0 00:10:02.050 }, 00:10:02.050 { 00:10:02.050 "name": "BaseBdev2", 00:10:02.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.050 "is_configured": false, 00:10:02.050 "data_offset": 0, 00:10:02.050 "data_size": 0 00:10:02.050 }, 00:10:02.050 { 00:10:02.050 "name": "BaseBdev3", 00:10:02.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.050 "is_configured": false, 00:10:02.050 "data_offset": 0, 00:10:02.050 "data_size": 0 00:10:02.050 } 00:10:02.050 ] 00:10:02.050 }' 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.050 14:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.617 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.617 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.617 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.617 [2024-11-20 14:20:41.423786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.617 [2024-11-20 14:20:41.423956] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:02.617 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.617 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.617 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.617 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.617 [2024-11-20 14:20:41.435794] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.617 [2024-11-20 14:20:41.435849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.617 [2024-11-20 14:20:41.435865] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.617 [2024-11-20 14:20:41.435881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.618 [2024-11-20 14:20:41.435890] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.618 [2024-11-20 14:20:41.435905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.618 [2024-11-20 14:20:41.481602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.618 BaseBdev1 
00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.618 [ 00:10:02.618 { 00:10:02.618 "name": "BaseBdev1", 00:10:02.618 "aliases": [ 00:10:02.618 "416718bf-1167-4a16-971d-e33ca9534dd8" 00:10:02.618 ], 00:10:02.618 "product_name": "Malloc disk", 00:10:02.618 "block_size": 512, 00:10:02.618 "num_blocks": 65536, 00:10:02.618 "uuid": "416718bf-1167-4a16-971d-e33ca9534dd8", 00:10:02.618 "assigned_rate_limits": { 00:10:02.618 
"rw_ios_per_sec": 0, 00:10:02.618 "rw_mbytes_per_sec": 0, 00:10:02.618 "r_mbytes_per_sec": 0, 00:10:02.618 "w_mbytes_per_sec": 0 00:10:02.618 }, 00:10:02.618 "claimed": true, 00:10:02.618 "claim_type": "exclusive_write", 00:10:02.618 "zoned": false, 00:10:02.618 "supported_io_types": { 00:10:02.618 "read": true, 00:10:02.618 "write": true, 00:10:02.618 "unmap": true, 00:10:02.618 "flush": true, 00:10:02.618 "reset": true, 00:10:02.618 "nvme_admin": false, 00:10:02.618 "nvme_io": false, 00:10:02.618 "nvme_io_md": false, 00:10:02.618 "write_zeroes": true, 00:10:02.618 "zcopy": true, 00:10:02.618 "get_zone_info": false, 00:10:02.618 "zone_management": false, 00:10:02.618 "zone_append": false, 00:10:02.618 "compare": false, 00:10:02.618 "compare_and_write": false, 00:10:02.618 "abort": true, 00:10:02.618 "seek_hole": false, 00:10:02.618 "seek_data": false, 00:10:02.618 "copy": true, 00:10:02.618 "nvme_iov_md": false 00:10:02.618 }, 00:10:02.618 "memory_domains": [ 00:10:02.618 { 00:10:02.618 "dma_device_id": "system", 00:10:02.618 "dma_device_type": 1 00:10:02.618 }, 00:10:02.618 { 00:10:02.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.618 "dma_device_type": 2 00:10:02.618 } 00:10:02.618 ], 00:10:02.618 "driver_specific": {} 00:10:02.618 } 00:10:02.618 ] 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.618 "name": "Existed_Raid", 00:10:02.618 "uuid": "36f9c8c9-7e36-47cd-a0b5-195d76bf2d8f", 00:10:02.618 "strip_size_kb": 64, 00:10:02.618 "state": "configuring", 00:10:02.618 "raid_level": "raid0", 00:10:02.618 "superblock": true, 00:10:02.618 "num_base_bdevs": 3, 00:10:02.618 "num_base_bdevs_discovered": 1, 00:10:02.618 "num_base_bdevs_operational": 3, 00:10:02.618 "base_bdevs_list": [ 00:10:02.618 { 00:10:02.618 "name": "BaseBdev1", 00:10:02.618 "uuid": "416718bf-1167-4a16-971d-e33ca9534dd8", 00:10:02.618 "is_configured": true, 00:10:02.618 "data_offset": 2048, 00:10:02.618 "data_size": 63488 
00:10:02.618 }, 00:10:02.618 { 00:10:02.618 "name": "BaseBdev2", 00:10:02.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.618 "is_configured": false, 00:10:02.618 "data_offset": 0, 00:10:02.618 "data_size": 0 00:10:02.618 }, 00:10:02.618 { 00:10:02.618 "name": "BaseBdev3", 00:10:02.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.618 "is_configured": false, 00:10:02.618 "data_offset": 0, 00:10:02.618 "data_size": 0 00:10:02.618 } 00:10:02.618 ] 00:10:02.618 }' 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.618 14:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.186 [2024-11-20 14:20:42.037853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.186 [2024-11-20 14:20:42.037912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.186 [2024-11-20 14:20:42.045887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.186 [2024-11-20 
14:20:42.048552] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.186 [2024-11-20 14:20:42.048744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.186 [2024-11-20 14:20:42.048771] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.186 [2024-11-20 14:20:42.048789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.186 "name": "Existed_Raid", 00:10:03.186 "uuid": "b639ac18-5d63-405b-83d3-27598e8b56ed", 00:10:03.186 "strip_size_kb": 64, 00:10:03.186 "state": "configuring", 00:10:03.186 "raid_level": "raid0", 00:10:03.186 "superblock": true, 00:10:03.186 "num_base_bdevs": 3, 00:10:03.186 "num_base_bdevs_discovered": 1, 00:10:03.186 "num_base_bdevs_operational": 3, 00:10:03.186 "base_bdevs_list": [ 00:10:03.186 { 00:10:03.186 "name": "BaseBdev1", 00:10:03.186 "uuid": "416718bf-1167-4a16-971d-e33ca9534dd8", 00:10:03.186 "is_configured": true, 00:10:03.186 "data_offset": 2048, 00:10:03.186 "data_size": 63488 00:10:03.186 }, 00:10:03.186 { 00:10:03.186 "name": "BaseBdev2", 00:10:03.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.186 "is_configured": false, 00:10:03.186 "data_offset": 0, 00:10:03.186 "data_size": 0 00:10:03.186 }, 00:10:03.186 { 00:10:03.186 "name": "BaseBdev3", 00:10:03.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.186 "is_configured": false, 00:10:03.186 "data_offset": 0, 00:10:03.186 "data_size": 0 00:10:03.186 } 00:10:03.186 ] 00:10:03.186 }' 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.186 14:20:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:03.755 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:03.755 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.755 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.755 [2024-11-20 14:20:42.617586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.755 BaseBdev2 00:10:03.755 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.755 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:03.755 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:03.755 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.755 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:03.755 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.755 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.756 [ 00:10:03.756 { 00:10:03.756 "name": "BaseBdev2", 00:10:03.756 "aliases": [ 00:10:03.756 "125b37a8-92e9-4721-ab5f-55deb0b47d55" 00:10:03.756 ], 00:10:03.756 "product_name": "Malloc disk", 00:10:03.756 "block_size": 512, 00:10:03.756 "num_blocks": 65536, 00:10:03.756 "uuid": "125b37a8-92e9-4721-ab5f-55deb0b47d55", 00:10:03.756 "assigned_rate_limits": { 00:10:03.756 "rw_ios_per_sec": 0, 00:10:03.756 "rw_mbytes_per_sec": 0, 00:10:03.756 "r_mbytes_per_sec": 0, 00:10:03.756 "w_mbytes_per_sec": 0 00:10:03.756 }, 00:10:03.756 "claimed": true, 00:10:03.756 "claim_type": "exclusive_write", 00:10:03.756 "zoned": false, 00:10:03.756 "supported_io_types": { 00:10:03.756 "read": true, 00:10:03.756 "write": true, 00:10:03.756 "unmap": true, 00:10:03.756 "flush": true, 00:10:03.756 "reset": true, 00:10:03.756 "nvme_admin": false, 00:10:03.756 "nvme_io": false, 00:10:03.756 "nvme_io_md": false, 00:10:03.756 "write_zeroes": true, 00:10:03.756 "zcopy": true, 00:10:03.756 "get_zone_info": false, 00:10:03.756 "zone_management": false, 00:10:03.756 "zone_append": false, 00:10:03.756 "compare": false, 00:10:03.756 "compare_and_write": false, 00:10:03.756 "abort": true, 00:10:03.756 "seek_hole": false, 00:10:03.756 "seek_data": false, 00:10:03.756 "copy": true, 00:10:03.756 "nvme_iov_md": false 00:10:03.756 }, 00:10:03.756 "memory_domains": [ 00:10:03.756 { 00:10:03.756 "dma_device_id": "system", 00:10:03.756 "dma_device_type": 1 00:10:03.756 }, 00:10:03.756 { 00:10:03.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.756 "dma_device_type": 2 00:10:03.756 } 00:10:03.756 ], 00:10:03.756 "driver_specific": {} 00:10:03.756 } 00:10:03.756 ] 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.756 "name": "Existed_Raid", 00:10:03.756 "uuid": "b639ac18-5d63-405b-83d3-27598e8b56ed", 00:10:03.756 "strip_size_kb": 64, 00:10:03.756 "state": "configuring", 00:10:03.756 "raid_level": "raid0", 00:10:03.756 "superblock": true, 00:10:03.756 "num_base_bdevs": 3, 00:10:03.756 "num_base_bdevs_discovered": 2, 00:10:03.756 "num_base_bdevs_operational": 3, 00:10:03.756 "base_bdevs_list": [ 00:10:03.756 { 00:10:03.756 "name": "BaseBdev1", 00:10:03.756 "uuid": "416718bf-1167-4a16-971d-e33ca9534dd8", 00:10:03.756 "is_configured": true, 00:10:03.756 "data_offset": 2048, 00:10:03.756 "data_size": 63488 00:10:03.756 }, 00:10:03.756 { 00:10:03.756 "name": "BaseBdev2", 00:10:03.756 "uuid": "125b37a8-92e9-4721-ab5f-55deb0b47d55", 00:10:03.756 "is_configured": true, 00:10:03.756 "data_offset": 2048, 00:10:03.756 "data_size": 63488 00:10:03.756 }, 00:10:03.756 { 00:10:03.756 "name": "BaseBdev3", 00:10:03.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.756 "is_configured": false, 00:10:03.756 "data_offset": 0, 00:10:03.756 "data_size": 0 00:10:03.756 } 00:10:03.756 ] 00:10:03.756 }' 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.756 14:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.325 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:04.325 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.325 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.325 [2024-11-20 14:20:43.218376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.325 [2024-11-20 14:20:43.218733] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:04.325 [2024-11-20 14:20:43.218763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:04.325 BaseBdev3 00:10:04.325 [2024-11-20 14:20:43.219141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:04.325 [2024-11-20 14:20:43.219367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:04.325 [2024-11-20 14:20:43.219390] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:04.325 [2024-11-20 14:20:43.219570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.325 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.325 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:04.325 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:04.325 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.325 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:04.325 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.325 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.325 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.325 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.325 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.325 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:04.325 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.325 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.325 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.325 [ 00:10:04.325 { 00:10:04.325 "name": "BaseBdev3", 00:10:04.325 "aliases": [ 00:10:04.325 "6cdfa262-52b6-444c-8382-8da89b16bc81" 00:10:04.325 ], 00:10:04.325 "product_name": "Malloc disk", 00:10:04.326 "block_size": 512, 00:10:04.326 "num_blocks": 65536, 00:10:04.326 "uuid": "6cdfa262-52b6-444c-8382-8da89b16bc81", 00:10:04.326 "assigned_rate_limits": { 00:10:04.326 "rw_ios_per_sec": 0, 00:10:04.326 "rw_mbytes_per_sec": 0, 00:10:04.326 "r_mbytes_per_sec": 0, 00:10:04.326 "w_mbytes_per_sec": 0 00:10:04.326 }, 00:10:04.326 "claimed": true, 00:10:04.326 "claim_type": "exclusive_write", 00:10:04.326 "zoned": false, 00:10:04.326 "supported_io_types": { 00:10:04.326 "read": true, 00:10:04.326 "write": true, 00:10:04.326 "unmap": true, 00:10:04.326 "flush": true, 00:10:04.326 "reset": true, 00:10:04.326 "nvme_admin": false, 00:10:04.326 "nvme_io": false, 00:10:04.326 "nvme_io_md": false, 00:10:04.326 "write_zeroes": true, 00:10:04.326 "zcopy": true, 00:10:04.326 "get_zone_info": false, 00:10:04.326 "zone_management": false, 00:10:04.326 "zone_append": false, 00:10:04.326 "compare": false, 00:10:04.326 "compare_and_write": false, 00:10:04.326 "abort": true, 00:10:04.326 "seek_hole": false, 00:10:04.326 "seek_data": false, 00:10:04.326 "copy": true, 00:10:04.326 "nvme_iov_md": false 00:10:04.326 }, 00:10:04.326 "memory_domains": [ 00:10:04.326 { 00:10:04.326 "dma_device_id": "system", 00:10:04.326 "dma_device_type": 1 00:10:04.326 }, 00:10:04.326 { 00:10:04.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.326 "dma_device_type": 2 00:10:04.326 } 00:10:04.326 ], 00:10:04.326 "driver_specific": 
{} 00:10:04.326 } 00:10:04.326 ] 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.326 
14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.326 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.601 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.601 "name": "Existed_Raid", 00:10:04.601 "uuid": "b639ac18-5d63-405b-83d3-27598e8b56ed", 00:10:04.601 "strip_size_kb": 64, 00:10:04.601 "state": "online", 00:10:04.601 "raid_level": "raid0", 00:10:04.601 "superblock": true, 00:10:04.601 "num_base_bdevs": 3, 00:10:04.601 "num_base_bdevs_discovered": 3, 00:10:04.601 "num_base_bdevs_operational": 3, 00:10:04.601 "base_bdevs_list": [ 00:10:04.601 { 00:10:04.601 "name": "BaseBdev1", 00:10:04.601 "uuid": "416718bf-1167-4a16-971d-e33ca9534dd8", 00:10:04.601 "is_configured": true, 00:10:04.601 "data_offset": 2048, 00:10:04.601 "data_size": 63488 00:10:04.601 }, 00:10:04.601 { 00:10:04.601 "name": "BaseBdev2", 00:10:04.601 "uuid": "125b37a8-92e9-4721-ab5f-55deb0b47d55", 00:10:04.601 "is_configured": true, 00:10:04.601 "data_offset": 2048, 00:10:04.601 "data_size": 63488 00:10:04.601 }, 00:10:04.601 { 00:10:04.601 "name": "BaseBdev3", 00:10:04.601 "uuid": "6cdfa262-52b6-444c-8382-8da89b16bc81", 00:10:04.601 "is_configured": true, 00:10:04.601 "data_offset": 2048, 00:10:04.601 "data_size": 63488 00:10:04.601 } 00:10:04.601 ] 00:10:04.601 }' 00:10:04.601 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.601 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.881 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:04.881 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:04.881 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:10:04.881 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:04.881 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.881 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.881 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.881 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:04.881 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.881 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.881 [2024-11-20 14:20:43.807035] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.881 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.881 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.881 "name": "Existed_Raid", 00:10:04.881 "aliases": [ 00:10:04.881 "b639ac18-5d63-405b-83d3-27598e8b56ed" 00:10:04.881 ], 00:10:04.881 "product_name": "Raid Volume", 00:10:04.881 "block_size": 512, 00:10:04.881 "num_blocks": 190464, 00:10:04.881 "uuid": "b639ac18-5d63-405b-83d3-27598e8b56ed", 00:10:04.881 "assigned_rate_limits": { 00:10:04.881 "rw_ios_per_sec": 0, 00:10:04.881 "rw_mbytes_per_sec": 0, 00:10:04.881 "r_mbytes_per_sec": 0, 00:10:04.881 "w_mbytes_per_sec": 0 00:10:04.881 }, 00:10:04.881 "claimed": false, 00:10:04.881 "zoned": false, 00:10:04.881 "supported_io_types": { 00:10:04.881 "read": true, 00:10:04.881 "write": true, 00:10:04.881 "unmap": true, 00:10:04.881 "flush": true, 00:10:04.881 "reset": true, 00:10:04.881 "nvme_admin": false, 00:10:04.881 "nvme_io": false, 00:10:04.881 "nvme_io_md": false, 00:10:04.881 
"write_zeroes": true, 00:10:04.881 "zcopy": false, 00:10:04.881 "get_zone_info": false, 00:10:04.881 "zone_management": false, 00:10:04.881 "zone_append": false, 00:10:04.881 "compare": false, 00:10:04.881 "compare_and_write": false, 00:10:04.881 "abort": false, 00:10:04.881 "seek_hole": false, 00:10:04.881 "seek_data": false, 00:10:04.881 "copy": false, 00:10:04.881 "nvme_iov_md": false 00:10:04.881 }, 00:10:04.881 "memory_domains": [ 00:10:04.881 { 00:10:04.881 "dma_device_id": "system", 00:10:04.881 "dma_device_type": 1 00:10:04.881 }, 00:10:04.881 { 00:10:04.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.881 "dma_device_type": 2 00:10:04.881 }, 00:10:04.881 { 00:10:04.881 "dma_device_id": "system", 00:10:04.881 "dma_device_type": 1 00:10:04.881 }, 00:10:04.881 { 00:10:04.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.881 "dma_device_type": 2 00:10:04.881 }, 00:10:04.881 { 00:10:04.881 "dma_device_id": "system", 00:10:04.881 "dma_device_type": 1 00:10:04.881 }, 00:10:04.881 { 00:10:04.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.881 "dma_device_type": 2 00:10:04.881 } 00:10:04.881 ], 00:10:04.881 "driver_specific": { 00:10:04.881 "raid": { 00:10:04.881 "uuid": "b639ac18-5d63-405b-83d3-27598e8b56ed", 00:10:04.881 "strip_size_kb": 64, 00:10:04.881 "state": "online", 00:10:04.881 "raid_level": "raid0", 00:10:04.881 "superblock": true, 00:10:04.881 "num_base_bdevs": 3, 00:10:04.881 "num_base_bdevs_discovered": 3, 00:10:04.881 "num_base_bdevs_operational": 3, 00:10:04.881 "base_bdevs_list": [ 00:10:04.881 { 00:10:04.881 "name": "BaseBdev1", 00:10:04.881 "uuid": "416718bf-1167-4a16-971d-e33ca9534dd8", 00:10:04.881 "is_configured": true, 00:10:04.881 "data_offset": 2048, 00:10:04.881 "data_size": 63488 00:10:04.881 }, 00:10:04.881 { 00:10:04.881 "name": "BaseBdev2", 00:10:04.881 "uuid": "125b37a8-92e9-4721-ab5f-55deb0b47d55", 00:10:04.881 "is_configured": true, 00:10:04.881 "data_offset": 2048, 00:10:04.881 "data_size": 63488 00:10:04.881 }, 
00:10:04.881 { 00:10:04.881 "name": "BaseBdev3", 00:10:04.881 "uuid": "6cdfa262-52b6-444c-8382-8da89b16bc81", 00:10:04.881 "is_configured": true, 00:10:04.881 "data_offset": 2048, 00:10:04.881 "data_size": 63488 00:10:04.881 } 00:10:04.881 ] 00:10:04.881 } 00:10:04.881 } 00:10:04.881 }' 00:10:04.881 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.141 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:05.141 BaseBdev2 00:10:05.141 BaseBdev3' 00:10:05.141 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.141 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:05.141 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.141 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:05.141 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.141 14:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.141 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.141 14:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.141 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.141 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.141 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.141 
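The `jq` extractions traced above (bdev_raid.sh@188 and @189/@192) can be reproduced standalone. Note why `cmp_raid_bdev` and `cmp_base_bdev` come out as `'512   '` with trailing spaces: the Raid Volume and Malloc disk dumps carry no `md_size`, `md_interleave`, or `dif_type` fields, so those lookups yield `null`, and jq's `join(" ")` renders null as an empty string — which is exactly why the `[[ 512 == \5\1\2\ \ \ ]]` comparisons match against three escaped trailing spaces. A minimal sketch using hypothetical sample data shaped like the `bdev_get_bdevs` output above:

```shell
# Sample JSON mirroring the shape of the Existed_Raid dump above (hypothetical values)
cat > /tmp/existed_raid.json <<'EOF'
{
  "name": "Existed_Raid",
  "block_size": 512,
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": null, "is_configured": false}
      ]
    }
  }
}
EOF

# Names of configured base bdevs, as in bdev_raid.sh@188
jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' \
    /tmp/existed_raid.json

# block_size/md_size/md_interleave/dif_type comparison string, as in @189/@192;
# the absent fields are null, and join(" ") turns each null into an empty string,
# producing "512" followed by three spaces
jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' \
    /tmp/existed_raid.json
```

The same null-to-empty-string behavior explains why the test's `[[ ... == \5\1\2\ \ \ ]]` pattern must escape each trailing space individually: an unquoted trailing-space string would otherwise be trimmed by word splitting.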
14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:05.141 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.141 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.141 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.141 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.141 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.141 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.141 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.141 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:05.141 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.141 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.141 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.141 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.399 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.399 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.399 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.399 14:20:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.399 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.399 [2024-11-20 14:20:44.130772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.399 [2024-11-20 14:20:44.130822] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.400 [2024-11-20 14:20:44.130891] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.400 "name": "Existed_Raid", 00:10:05.400 "uuid": "b639ac18-5d63-405b-83d3-27598e8b56ed", 00:10:05.400 "strip_size_kb": 64, 00:10:05.400 "state": "offline", 00:10:05.400 "raid_level": "raid0", 00:10:05.400 "superblock": true, 00:10:05.400 "num_base_bdevs": 3, 00:10:05.400 "num_base_bdevs_discovered": 2, 00:10:05.400 "num_base_bdevs_operational": 2, 00:10:05.400 "base_bdevs_list": [ 00:10:05.400 { 00:10:05.400 "name": null, 00:10:05.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.400 "is_configured": false, 00:10:05.400 "data_offset": 0, 00:10:05.400 "data_size": 63488 00:10:05.400 }, 00:10:05.400 { 00:10:05.400 "name": "BaseBdev2", 00:10:05.400 "uuid": "125b37a8-92e9-4721-ab5f-55deb0b47d55", 00:10:05.400 "is_configured": true, 00:10:05.400 "data_offset": 2048, 00:10:05.400 "data_size": 63488 00:10:05.400 }, 00:10:05.400 { 00:10:05.400 "name": "BaseBdev3", 00:10:05.400 "uuid": "6cdfa262-52b6-444c-8382-8da89b16bc81", 
00:10:05.400 "is_configured": true, 00:10:05.400 "data_offset": 2048, 00:10:05.400 "data_size": 63488 00:10:05.400 } 00:10:05.400 ] 00:10:05.400 }' 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.400 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.967 [2024-11-20 14:20:44.806892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.967 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.227 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.227 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.227 14:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:06.227 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.227 14:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.227 [2024-11-20 14:20:44.953341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:06.227 [2024-11-20 14:20:44.953447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.227 BaseBdev2 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.227 14:20:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.227 [ 00:10:06.227 { 00:10:06.227 "name": "BaseBdev2", 00:10:06.227 "aliases": [ 00:10:06.227 "20dd811f-016b-4270-8a25-3273b6b1ce59" 00:10:06.227 ], 00:10:06.227 "product_name": "Malloc disk", 00:10:06.227 "block_size": 512, 00:10:06.227 "num_blocks": 65536, 00:10:06.227 "uuid": "20dd811f-016b-4270-8a25-3273b6b1ce59", 00:10:06.227 "assigned_rate_limits": { 00:10:06.227 "rw_ios_per_sec": 0, 00:10:06.227 "rw_mbytes_per_sec": 0, 00:10:06.227 "r_mbytes_per_sec": 0, 00:10:06.227 "w_mbytes_per_sec": 0 00:10:06.227 }, 00:10:06.227 "claimed": false, 00:10:06.227 "zoned": false, 00:10:06.227 "supported_io_types": { 00:10:06.227 "read": true, 00:10:06.227 "write": true, 00:10:06.227 "unmap": true, 00:10:06.227 "flush": true, 00:10:06.227 "reset": true, 00:10:06.227 "nvme_admin": false, 00:10:06.227 "nvme_io": false, 00:10:06.227 "nvme_io_md": false, 00:10:06.227 "write_zeroes": true, 00:10:06.227 "zcopy": true, 00:10:06.227 "get_zone_info": false, 00:10:06.227 
"zone_management": false, 00:10:06.227 "zone_append": false, 00:10:06.227 "compare": false, 00:10:06.227 "compare_and_write": false, 00:10:06.227 "abort": true, 00:10:06.227 "seek_hole": false, 00:10:06.227 "seek_data": false, 00:10:06.227 "copy": true, 00:10:06.227 "nvme_iov_md": false 00:10:06.227 }, 00:10:06.227 "memory_domains": [ 00:10:06.227 { 00:10:06.227 "dma_device_id": "system", 00:10:06.227 "dma_device_type": 1 00:10:06.227 }, 00:10:06.227 { 00:10:06.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.227 "dma_device_type": 2 00:10:06.227 } 00:10:06.227 ], 00:10:06.227 "driver_specific": {} 00:10:06.227 } 00:10:06.227 ] 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.227 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.487 BaseBdev3 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.487 [ 00:10:06.487 { 00:10:06.487 "name": "BaseBdev3", 00:10:06.487 "aliases": [ 00:10:06.487 "999e6818-5bcd-45f1-8d92-3e8a0300fd04" 00:10:06.487 ], 00:10:06.487 "product_name": "Malloc disk", 00:10:06.487 "block_size": 512, 00:10:06.487 "num_blocks": 65536, 00:10:06.487 "uuid": "999e6818-5bcd-45f1-8d92-3e8a0300fd04", 00:10:06.487 "assigned_rate_limits": { 00:10:06.487 "rw_ios_per_sec": 0, 00:10:06.487 "rw_mbytes_per_sec": 0, 00:10:06.487 "r_mbytes_per_sec": 0, 00:10:06.487 "w_mbytes_per_sec": 0 00:10:06.487 }, 00:10:06.487 "claimed": false, 00:10:06.487 "zoned": false, 00:10:06.487 "supported_io_types": { 00:10:06.487 "read": true, 00:10:06.487 "write": true, 00:10:06.487 "unmap": true, 00:10:06.487 "flush": true, 00:10:06.487 "reset": true, 00:10:06.487 "nvme_admin": false, 00:10:06.487 "nvme_io": false, 00:10:06.487 "nvme_io_md": false, 00:10:06.487 "write_zeroes": true, 00:10:06.487 
"zcopy": true, 00:10:06.487 "get_zone_info": false, 00:10:06.487 "zone_management": false, 00:10:06.487 "zone_append": false, 00:10:06.487 "compare": false, 00:10:06.487 "compare_and_write": false, 00:10:06.487 "abort": true, 00:10:06.487 "seek_hole": false, 00:10:06.487 "seek_data": false, 00:10:06.487 "copy": true, 00:10:06.487 "nvme_iov_md": false 00:10:06.487 }, 00:10:06.487 "memory_domains": [ 00:10:06.487 { 00:10:06.487 "dma_device_id": "system", 00:10:06.487 "dma_device_type": 1 00:10:06.487 }, 00:10:06.487 { 00:10:06.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.487 "dma_device_type": 2 00:10:06.487 } 00:10:06.487 ], 00:10:06.487 "driver_specific": {} 00:10:06.487 } 00:10:06.487 ] 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.487 [2024-11-20 14:20:45.259567] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.487 [2024-11-20 14:20:45.259621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.487 [2024-11-20 14:20:45.259666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.487 [2024-11-20 14:20:45.262133] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.487 14:20:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.487 "name": "Existed_Raid", 00:10:06.487 "uuid": "b1197f27-57c5-44e1-8ebc-1cf79e60d0eb", 00:10:06.487 "strip_size_kb": 64, 00:10:06.487 "state": "configuring", 00:10:06.487 "raid_level": "raid0", 00:10:06.487 "superblock": true, 00:10:06.487 "num_base_bdevs": 3, 00:10:06.487 "num_base_bdevs_discovered": 2, 00:10:06.487 "num_base_bdevs_operational": 3, 00:10:06.487 "base_bdevs_list": [ 00:10:06.487 { 00:10:06.487 "name": "BaseBdev1", 00:10:06.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.487 "is_configured": false, 00:10:06.487 "data_offset": 0, 00:10:06.487 "data_size": 0 00:10:06.487 }, 00:10:06.487 { 00:10:06.487 "name": "BaseBdev2", 00:10:06.487 "uuid": "20dd811f-016b-4270-8a25-3273b6b1ce59", 00:10:06.487 "is_configured": true, 00:10:06.487 "data_offset": 2048, 00:10:06.487 "data_size": 63488 00:10:06.487 }, 00:10:06.487 { 00:10:06.487 "name": "BaseBdev3", 00:10:06.487 "uuid": "999e6818-5bcd-45f1-8d92-3e8a0300fd04", 00:10:06.487 "is_configured": true, 00:10:06.487 "data_offset": 2048, 00:10:06.487 "data_size": 63488 00:10:06.487 } 00:10:06.487 ] 00:10:06.487 }' 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.487 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.057 [2024-11-20 14:20:45.791739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.057 14:20:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.057 "name": "Existed_Raid", 00:10:07.057 "uuid": "b1197f27-57c5-44e1-8ebc-1cf79e60d0eb", 00:10:07.057 "strip_size_kb": 64, 
00:10:07.057 "state": "configuring", 00:10:07.057 "raid_level": "raid0", 00:10:07.057 "superblock": true, 00:10:07.057 "num_base_bdevs": 3, 00:10:07.057 "num_base_bdevs_discovered": 1, 00:10:07.057 "num_base_bdevs_operational": 3, 00:10:07.057 "base_bdevs_list": [ 00:10:07.057 { 00:10:07.057 "name": "BaseBdev1", 00:10:07.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.057 "is_configured": false, 00:10:07.057 "data_offset": 0, 00:10:07.057 "data_size": 0 00:10:07.057 }, 00:10:07.057 { 00:10:07.057 "name": null, 00:10:07.057 "uuid": "20dd811f-016b-4270-8a25-3273b6b1ce59", 00:10:07.057 "is_configured": false, 00:10:07.057 "data_offset": 0, 00:10:07.057 "data_size": 63488 00:10:07.057 }, 00:10:07.057 { 00:10:07.057 "name": "BaseBdev3", 00:10:07.057 "uuid": "999e6818-5bcd-45f1-8d92-3e8a0300fd04", 00:10:07.057 "is_configured": true, 00:10:07.057 "data_offset": 2048, 00:10:07.057 "data_size": 63488 00:10:07.057 } 00:10:07.057 ] 00:10:07.057 }' 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.057 14:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.625 [2024-11-20 14:20:46.398561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.625 BaseBdev1 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.625 
[ 00:10:07.625 { 00:10:07.625 "name": "BaseBdev1", 00:10:07.625 "aliases": [ 00:10:07.625 "26233438-96d4-478a-b1d1-c48966b3a8bc" 00:10:07.625 ], 00:10:07.625 "product_name": "Malloc disk", 00:10:07.625 "block_size": 512, 00:10:07.625 "num_blocks": 65536, 00:10:07.625 "uuid": "26233438-96d4-478a-b1d1-c48966b3a8bc", 00:10:07.625 "assigned_rate_limits": { 00:10:07.625 "rw_ios_per_sec": 0, 00:10:07.625 "rw_mbytes_per_sec": 0, 00:10:07.625 "r_mbytes_per_sec": 0, 00:10:07.625 "w_mbytes_per_sec": 0 00:10:07.625 }, 00:10:07.625 "claimed": true, 00:10:07.625 "claim_type": "exclusive_write", 00:10:07.625 "zoned": false, 00:10:07.625 "supported_io_types": { 00:10:07.625 "read": true, 00:10:07.625 "write": true, 00:10:07.625 "unmap": true, 00:10:07.625 "flush": true, 00:10:07.625 "reset": true, 00:10:07.625 "nvme_admin": false, 00:10:07.625 "nvme_io": false, 00:10:07.625 "nvme_io_md": false, 00:10:07.625 "write_zeroes": true, 00:10:07.625 "zcopy": true, 00:10:07.625 "get_zone_info": false, 00:10:07.625 "zone_management": false, 00:10:07.625 "zone_append": false, 00:10:07.625 "compare": false, 00:10:07.625 "compare_and_write": false, 00:10:07.625 "abort": true, 00:10:07.625 "seek_hole": false, 00:10:07.625 "seek_data": false, 00:10:07.625 "copy": true, 00:10:07.625 "nvme_iov_md": false 00:10:07.625 }, 00:10:07.625 "memory_domains": [ 00:10:07.625 { 00:10:07.625 "dma_device_id": "system", 00:10:07.625 "dma_device_type": 1 00:10:07.625 }, 00:10:07.625 { 00:10:07.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.625 "dma_device_type": 2 00:10:07.625 } 00:10:07.625 ], 00:10:07.625 "driver_specific": {} 00:10:07.625 } 00:10:07.625 ] 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.625 "name": "Existed_Raid", 00:10:07.625 "uuid": "b1197f27-57c5-44e1-8ebc-1cf79e60d0eb", 00:10:07.625 "strip_size_kb": 64, 00:10:07.625 "state": "configuring", 00:10:07.625 "raid_level": "raid0", 00:10:07.625 "superblock": true, 
00:10:07.625 "num_base_bdevs": 3, 00:10:07.625 "num_base_bdevs_discovered": 2, 00:10:07.625 "num_base_bdevs_operational": 3, 00:10:07.625 "base_bdevs_list": [ 00:10:07.625 { 00:10:07.625 "name": "BaseBdev1", 00:10:07.625 "uuid": "26233438-96d4-478a-b1d1-c48966b3a8bc", 00:10:07.625 "is_configured": true, 00:10:07.625 "data_offset": 2048, 00:10:07.625 "data_size": 63488 00:10:07.625 }, 00:10:07.625 { 00:10:07.625 "name": null, 00:10:07.625 "uuid": "20dd811f-016b-4270-8a25-3273b6b1ce59", 00:10:07.625 "is_configured": false, 00:10:07.625 "data_offset": 0, 00:10:07.625 "data_size": 63488 00:10:07.625 }, 00:10:07.625 { 00:10:07.625 "name": "BaseBdev3", 00:10:07.625 "uuid": "999e6818-5bcd-45f1-8d92-3e8a0300fd04", 00:10:07.625 "is_configured": true, 00:10:07.625 "data_offset": 2048, 00:10:07.625 "data_size": 63488 00:10:07.625 } 00:10:07.625 ] 00:10:07.625 }' 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.625 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.192 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.192 14:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:08.192 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.192 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.192 14:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.192 [2024-11-20 14:20:47.006795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.192 "name": "Existed_Raid", 00:10:08.192 "uuid": "b1197f27-57c5-44e1-8ebc-1cf79e60d0eb", 00:10:08.192 "strip_size_kb": 64, 00:10:08.192 "state": "configuring", 00:10:08.192 "raid_level": "raid0", 00:10:08.192 "superblock": true, 00:10:08.192 "num_base_bdevs": 3, 00:10:08.192 "num_base_bdevs_discovered": 1, 00:10:08.192 "num_base_bdevs_operational": 3, 00:10:08.192 "base_bdevs_list": [ 00:10:08.192 { 00:10:08.192 "name": "BaseBdev1", 00:10:08.192 "uuid": "26233438-96d4-478a-b1d1-c48966b3a8bc", 00:10:08.192 "is_configured": true, 00:10:08.192 "data_offset": 2048, 00:10:08.192 "data_size": 63488 00:10:08.192 }, 00:10:08.192 { 00:10:08.192 "name": null, 00:10:08.192 "uuid": "20dd811f-016b-4270-8a25-3273b6b1ce59", 00:10:08.192 "is_configured": false, 00:10:08.192 "data_offset": 0, 00:10:08.192 "data_size": 63488 00:10:08.192 }, 00:10:08.192 { 00:10:08.192 "name": null, 00:10:08.192 "uuid": "999e6818-5bcd-45f1-8d92-3e8a0300fd04", 00:10:08.192 "is_configured": false, 00:10:08.192 "data_offset": 0, 00:10:08.192 "data_size": 63488 00:10:08.192 } 00:10:08.192 ] 00:10:08.192 }' 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.192 14:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.760 [2024-11-20 14:20:47.594974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.760 "name": "Existed_Raid", 00:10:08.760 "uuid": "b1197f27-57c5-44e1-8ebc-1cf79e60d0eb", 00:10:08.760 "strip_size_kb": 64, 00:10:08.760 "state": "configuring", 00:10:08.760 "raid_level": "raid0", 00:10:08.760 "superblock": true, 00:10:08.760 "num_base_bdevs": 3, 00:10:08.760 "num_base_bdevs_discovered": 2, 00:10:08.760 "num_base_bdevs_operational": 3, 00:10:08.760 "base_bdevs_list": [ 00:10:08.760 { 00:10:08.760 "name": "BaseBdev1", 00:10:08.760 "uuid": "26233438-96d4-478a-b1d1-c48966b3a8bc", 00:10:08.760 "is_configured": true, 00:10:08.760 "data_offset": 2048, 00:10:08.760 "data_size": 63488 00:10:08.760 }, 00:10:08.760 { 00:10:08.760 "name": null, 00:10:08.760 "uuid": "20dd811f-016b-4270-8a25-3273b6b1ce59", 00:10:08.760 "is_configured": false, 00:10:08.760 "data_offset": 0, 00:10:08.760 "data_size": 63488 00:10:08.760 }, 00:10:08.760 { 00:10:08.760 "name": "BaseBdev3", 00:10:08.760 "uuid": "999e6818-5bcd-45f1-8d92-3e8a0300fd04", 00:10:08.760 "is_configured": true, 00:10:08.760 "data_offset": 2048, 00:10:08.760 "data_size": 63488 00:10:08.760 } 00:10:08.760 ] 00:10:08.760 }' 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.760 14:20:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.330 [2024-11-20 14:20:48.175188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.330 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.598 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.598 "name": "Existed_Raid", 00:10:09.598 "uuid": "b1197f27-57c5-44e1-8ebc-1cf79e60d0eb", 00:10:09.598 "strip_size_kb": 64, 00:10:09.598 "state": "configuring", 00:10:09.598 "raid_level": "raid0", 00:10:09.598 "superblock": true, 00:10:09.598 "num_base_bdevs": 3, 00:10:09.598 "num_base_bdevs_discovered": 1, 00:10:09.598 "num_base_bdevs_operational": 3, 00:10:09.598 "base_bdevs_list": [ 00:10:09.598 { 00:10:09.598 "name": null, 00:10:09.598 "uuid": "26233438-96d4-478a-b1d1-c48966b3a8bc", 00:10:09.598 "is_configured": false, 00:10:09.598 "data_offset": 0, 00:10:09.598 "data_size": 63488 00:10:09.598 }, 00:10:09.598 { 00:10:09.598 "name": null, 00:10:09.598 "uuid": "20dd811f-016b-4270-8a25-3273b6b1ce59", 00:10:09.598 "is_configured": false, 00:10:09.598 "data_offset": 0, 00:10:09.598 
"data_size": 63488 00:10:09.598 }, 00:10:09.598 { 00:10:09.598 "name": "BaseBdev3", 00:10:09.598 "uuid": "999e6818-5bcd-45f1-8d92-3e8a0300fd04", 00:10:09.598 "is_configured": true, 00:10:09.598 "data_offset": 2048, 00:10:09.598 "data_size": 63488 00:10:09.598 } 00:10:09.598 ] 00:10:09.598 }' 00:10:09.598 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.598 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.857 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.857 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:09.857 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.857 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.857 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.857 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:09.857 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:09.857 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.857 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.116 [2024-11-20 14:20:48.836322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.116 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.116 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:10.116 14:20:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.116 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.116 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.116 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.116 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.116 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.116 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.116 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.116 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.116 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.116 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.116 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.116 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.116 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.116 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.116 "name": "Existed_Raid", 00:10:10.116 "uuid": "b1197f27-57c5-44e1-8ebc-1cf79e60d0eb", 00:10:10.116 "strip_size_kb": 64, 00:10:10.116 "state": "configuring", 00:10:10.116 "raid_level": "raid0", 00:10:10.116 "superblock": true, 00:10:10.116 "num_base_bdevs": 3, 00:10:10.117 
"num_base_bdevs_discovered": 2, 00:10:10.117 "num_base_bdevs_operational": 3, 00:10:10.117 "base_bdevs_list": [ 00:10:10.117 { 00:10:10.117 "name": null, 00:10:10.117 "uuid": "26233438-96d4-478a-b1d1-c48966b3a8bc", 00:10:10.117 "is_configured": false, 00:10:10.117 "data_offset": 0, 00:10:10.117 "data_size": 63488 00:10:10.117 }, 00:10:10.117 { 00:10:10.117 "name": "BaseBdev2", 00:10:10.117 "uuid": "20dd811f-016b-4270-8a25-3273b6b1ce59", 00:10:10.117 "is_configured": true, 00:10:10.117 "data_offset": 2048, 00:10:10.117 "data_size": 63488 00:10:10.117 }, 00:10:10.117 { 00:10:10.117 "name": "BaseBdev3", 00:10:10.117 "uuid": "999e6818-5bcd-45f1-8d92-3e8a0300fd04", 00:10:10.117 "is_configured": true, 00:10:10.117 "data_offset": 2048, 00:10:10.117 "data_size": 63488 00:10:10.117 } 00:10:10.117 ] 00:10:10.117 }' 00:10:10.117 14:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.117 14:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.685 14:20:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 26233438-96d4-478a-b1d1-c48966b3a8bc 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.685 [2024-11-20 14:20:49.511709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:10.685 [2024-11-20 14:20:49.512283] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:10.685 [2024-11-20 14:20:49.512315] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:10.685 [2024-11-20 14:20:49.512630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:10.685 NewBaseBdev 00:10:10.685 [2024-11-20 14:20:49.512866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:10.685 [2024-11-20 14:20:49.512884] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:10.685 [2024-11-20 14:20:49.513063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:10.685 
14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.685 [ 00:10:10.685 { 00:10:10.685 "name": "NewBaseBdev", 00:10:10.685 "aliases": [ 00:10:10.685 "26233438-96d4-478a-b1d1-c48966b3a8bc" 00:10:10.685 ], 00:10:10.685 "product_name": "Malloc disk", 00:10:10.685 "block_size": 512, 00:10:10.685 "num_blocks": 65536, 00:10:10.685 "uuid": "26233438-96d4-478a-b1d1-c48966b3a8bc", 00:10:10.685 "assigned_rate_limits": { 00:10:10.685 "rw_ios_per_sec": 0, 00:10:10.685 "rw_mbytes_per_sec": 0, 00:10:10.685 "r_mbytes_per_sec": 0, 00:10:10.685 "w_mbytes_per_sec": 0 00:10:10.685 }, 00:10:10.685 "claimed": true, 00:10:10.685 "claim_type": "exclusive_write", 00:10:10.685 "zoned": false, 00:10:10.685 "supported_io_types": { 00:10:10.685 "read": true, 00:10:10.685 "write": true, 00:10:10.685 
"unmap": true, 00:10:10.685 "flush": true, 00:10:10.685 "reset": true, 00:10:10.685 "nvme_admin": false, 00:10:10.685 "nvme_io": false, 00:10:10.685 "nvme_io_md": false, 00:10:10.685 "write_zeroes": true, 00:10:10.685 "zcopy": true, 00:10:10.685 "get_zone_info": false, 00:10:10.685 "zone_management": false, 00:10:10.685 "zone_append": false, 00:10:10.685 "compare": false, 00:10:10.685 "compare_and_write": false, 00:10:10.685 "abort": true, 00:10:10.685 "seek_hole": false, 00:10:10.685 "seek_data": false, 00:10:10.685 "copy": true, 00:10:10.685 "nvme_iov_md": false 00:10:10.685 }, 00:10:10.685 "memory_domains": [ 00:10:10.685 { 00:10:10.685 "dma_device_id": "system", 00:10:10.685 "dma_device_type": 1 00:10:10.685 }, 00:10:10.685 { 00:10:10.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.685 "dma_device_type": 2 00:10:10.685 } 00:10:10.685 ], 00:10:10.685 "driver_specific": {} 00:10:10.685 } 00:10:10.685 ] 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.685 "name": "Existed_Raid", 00:10:10.685 "uuid": "b1197f27-57c5-44e1-8ebc-1cf79e60d0eb", 00:10:10.685 "strip_size_kb": 64, 00:10:10.685 "state": "online", 00:10:10.685 "raid_level": "raid0", 00:10:10.685 "superblock": true, 00:10:10.685 "num_base_bdevs": 3, 00:10:10.685 "num_base_bdevs_discovered": 3, 00:10:10.685 "num_base_bdevs_operational": 3, 00:10:10.685 "base_bdevs_list": [ 00:10:10.685 { 00:10:10.685 "name": "NewBaseBdev", 00:10:10.685 "uuid": "26233438-96d4-478a-b1d1-c48966b3a8bc", 00:10:10.685 "is_configured": true, 00:10:10.685 "data_offset": 2048, 00:10:10.685 "data_size": 63488 00:10:10.685 }, 00:10:10.685 { 00:10:10.685 "name": "BaseBdev2", 00:10:10.685 "uuid": "20dd811f-016b-4270-8a25-3273b6b1ce59", 00:10:10.685 "is_configured": true, 00:10:10.685 "data_offset": 2048, 00:10:10.685 "data_size": 63488 00:10:10.685 }, 00:10:10.685 { 00:10:10.685 "name": "BaseBdev3", 00:10:10.685 "uuid": "999e6818-5bcd-45f1-8d92-3e8a0300fd04", 00:10:10.685 
"is_configured": true, 00:10:10.685 "data_offset": 2048, 00:10:10.685 "data_size": 63488 00:10:10.685 } 00:10:10.685 ] 00:10:10.685 }' 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.685 14:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.254 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:11.254 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:11.254 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.254 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.254 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.254 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.254 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:11.254 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.254 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.254 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.254 [2024-11-20 14:20:50.112340] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.254 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.254 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.254 "name": "Existed_Raid", 00:10:11.254 "aliases": [ 00:10:11.254 "b1197f27-57c5-44e1-8ebc-1cf79e60d0eb" 00:10:11.254 ], 00:10:11.254 "product_name": "Raid 
Volume", 00:10:11.254 "block_size": 512, 00:10:11.254 "num_blocks": 190464, 00:10:11.254 "uuid": "b1197f27-57c5-44e1-8ebc-1cf79e60d0eb", 00:10:11.254 "assigned_rate_limits": { 00:10:11.254 "rw_ios_per_sec": 0, 00:10:11.254 "rw_mbytes_per_sec": 0, 00:10:11.254 "r_mbytes_per_sec": 0, 00:10:11.254 "w_mbytes_per_sec": 0 00:10:11.254 }, 00:10:11.254 "claimed": false, 00:10:11.254 "zoned": false, 00:10:11.254 "supported_io_types": { 00:10:11.254 "read": true, 00:10:11.254 "write": true, 00:10:11.254 "unmap": true, 00:10:11.254 "flush": true, 00:10:11.254 "reset": true, 00:10:11.254 "nvme_admin": false, 00:10:11.254 "nvme_io": false, 00:10:11.254 "nvme_io_md": false, 00:10:11.254 "write_zeroes": true, 00:10:11.254 "zcopy": false, 00:10:11.254 "get_zone_info": false, 00:10:11.254 "zone_management": false, 00:10:11.254 "zone_append": false, 00:10:11.254 "compare": false, 00:10:11.254 "compare_and_write": false, 00:10:11.254 "abort": false, 00:10:11.254 "seek_hole": false, 00:10:11.254 "seek_data": false, 00:10:11.254 "copy": false, 00:10:11.254 "nvme_iov_md": false 00:10:11.254 }, 00:10:11.254 "memory_domains": [ 00:10:11.254 { 00:10:11.254 "dma_device_id": "system", 00:10:11.254 "dma_device_type": 1 00:10:11.254 }, 00:10:11.254 { 00:10:11.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.254 "dma_device_type": 2 00:10:11.254 }, 00:10:11.254 { 00:10:11.254 "dma_device_id": "system", 00:10:11.254 "dma_device_type": 1 00:10:11.254 }, 00:10:11.254 { 00:10:11.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.254 "dma_device_type": 2 00:10:11.254 }, 00:10:11.254 { 00:10:11.254 "dma_device_id": "system", 00:10:11.254 "dma_device_type": 1 00:10:11.254 }, 00:10:11.254 { 00:10:11.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.254 "dma_device_type": 2 00:10:11.254 } 00:10:11.254 ], 00:10:11.254 "driver_specific": { 00:10:11.254 "raid": { 00:10:11.254 "uuid": "b1197f27-57c5-44e1-8ebc-1cf79e60d0eb", 00:10:11.254 "strip_size_kb": 64, 00:10:11.254 "state": "online", 
00:10:11.254 "raid_level": "raid0", 00:10:11.254 "superblock": true, 00:10:11.254 "num_base_bdevs": 3, 00:10:11.254 "num_base_bdevs_discovered": 3, 00:10:11.254 "num_base_bdevs_operational": 3, 00:10:11.254 "base_bdevs_list": [ 00:10:11.254 { 00:10:11.254 "name": "NewBaseBdev", 00:10:11.254 "uuid": "26233438-96d4-478a-b1d1-c48966b3a8bc", 00:10:11.254 "is_configured": true, 00:10:11.254 "data_offset": 2048, 00:10:11.254 "data_size": 63488 00:10:11.254 }, 00:10:11.254 { 00:10:11.254 "name": "BaseBdev2", 00:10:11.254 "uuid": "20dd811f-016b-4270-8a25-3273b6b1ce59", 00:10:11.254 "is_configured": true, 00:10:11.254 "data_offset": 2048, 00:10:11.254 "data_size": 63488 00:10:11.254 }, 00:10:11.254 { 00:10:11.254 "name": "BaseBdev3", 00:10:11.254 "uuid": "999e6818-5bcd-45f1-8d92-3e8a0300fd04", 00:10:11.255 "is_configured": true, 00:10:11.255 "data_offset": 2048, 00:10:11.255 "data_size": 63488 00:10:11.255 } 00:10:11.255 ] 00:10:11.255 } 00:10:11.255 } 00:10:11.255 }' 00:10:11.255 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.255 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:11.255 BaseBdev2 00:10:11.255 BaseBdev3' 00:10:11.255 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.514 14:20:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.514 [2024-11-20 14:20:50.444061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.514 [2024-11-20 14:20:50.444096] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.514 [2024-11-20 14:20:50.444196] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.514 [2024-11-20 14:20:50.444267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.514 [2024-11-20 14:20:50.444286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64450 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64450 ']' 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # 
kill -0 64450 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64450 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64450' 00:10:11.514 killing process with pid 64450 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64450 00:10:11.514 [2024-11-20 14:20:50.486453] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:11.514 14:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64450 00:10:12.082 [2024-11-20 14:20:50.758738] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:13.016 14:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:13.016 00:10:13.016 real 0m11.980s 00:10:13.016 user 0m19.931s 00:10:13.016 sys 0m1.627s 00:10:13.016 14:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.016 14:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.016 ************************************ 00:10:13.016 END TEST raid_state_function_test_sb 00:10:13.016 ************************************ 00:10:13.016 14:20:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:10:13.016 14:20:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 
00:10:13.016 14:20:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.016 14:20:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:13.016 ************************************ 00:10:13.016 START TEST raid_superblock_test 00:10:13.016 ************************************ 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:13.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65087 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65087 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65087 ']' 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.016 14:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.016 [2024-11-20 14:20:51.979414] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:10:13.016 [2024-11-20 14:20:51.979959] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65087 ] 00:10:13.274 [2024-11-20 14:20:52.166713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.532 [2024-11-20 14:20:52.293026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.532 [2024-11-20 14:20:52.498043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.532 [2024-11-20 14:20:52.498127] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.100 14:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.100 14:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:14.100 14:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:14.100 14:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:14.100 14:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:14.100 14:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:14.100 14:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:14.100 14:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:14.100 14:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:14.100 14:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:14.100 14:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:14.100 
14:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.100 14:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.100 malloc1 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.100 [2024-11-20 14:20:53.022404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:14.100 [2024-11-20 14:20:53.022654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.100 [2024-11-20 14:20:53.022729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:14.100 [2024-11-20 14:20:53.022909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.100 [2024-11-20 14:20:53.025770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.100 [2024-11-20 14:20:53.025970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:14.100 pt1 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.100 malloc2 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.100 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:14.101 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.101 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.101 [2024-11-20 14:20:53.072742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:14.101 [2024-11-20 14:20:53.072834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.101 [2024-11-20 14:20:53.072869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:14.101 [2024-11-20 14:20:53.072884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.101 [2024-11-20 14:20:53.075744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.101 [2024-11-20 14:20:53.075960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:14.101 
pt2 00:10:14.101 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.101 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:14.101 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:14.101 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:14.101 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:14.101 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:14.101 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:14.101 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:14.101 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:14.101 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:14.101 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.101 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.360 malloc3 00:10:14.360 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.360 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:14.360 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.360 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.360 [2024-11-20 14:20:53.135696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:14.360 [2024-11-20 14:20:53.135949] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.360 [2024-11-20 14:20:53.135993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:14.360 [2024-11-20 14:20:53.136033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.360 [2024-11-20 14:20:53.138895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.360 [2024-11-20 14:20:53.139070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:14.360 pt3 00:10:14.360 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.360 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:14.360 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:14.360 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:14.360 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.360 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.360 [2024-11-20 14:20:53.147943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:14.360 [2024-11-20 14:20:53.150516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:14.360 [2024-11-20 14:20:53.150601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:14.360 [2024-11-20 14:20:53.150822] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:14.360 [2024-11-20 14:20:53.150845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:14.360 [2024-11-20 14:20:53.151207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:14.361 [2024-11-20 14:20:53.151499] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:14.361 [2024-11-20 14:20:53.151520] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:14.361 [2024-11-20 14:20:53.151699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.361 14:20:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.361 "name": "raid_bdev1", 00:10:14.361 "uuid": "5d9eb05c-74d8-4881-aa83-5812f475dc5e", 00:10:14.361 "strip_size_kb": 64, 00:10:14.361 "state": "online", 00:10:14.361 "raid_level": "raid0", 00:10:14.361 "superblock": true, 00:10:14.361 "num_base_bdevs": 3, 00:10:14.361 "num_base_bdevs_discovered": 3, 00:10:14.361 "num_base_bdevs_operational": 3, 00:10:14.361 "base_bdevs_list": [ 00:10:14.361 { 00:10:14.361 "name": "pt1", 00:10:14.361 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.361 "is_configured": true, 00:10:14.361 "data_offset": 2048, 00:10:14.361 "data_size": 63488 00:10:14.361 }, 00:10:14.361 { 00:10:14.361 "name": "pt2", 00:10:14.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.361 "is_configured": true, 00:10:14.361 "data_offset": 2048, 00:10:14.361 "data_size": 63488 00:10:14.361 }, 00:10:14.361 { 00:10:14.361 "name": "pt3", 00:10:14.361 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.361 "is_configured": true, 00:10:14.361 "data_offset": 2048, 00:10:14.361 "data_size": 63488 00:10:14.361 } 00:10:14.361 ] 00:10:14.361 }' 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.361 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.957 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:14.957 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:14.957 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:14.957 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:14.957 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:14.957 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:14.957 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:14.957 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:14.957 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.957 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.957 [2024-11-20 14:20:53.660501] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.957 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.957 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:14.957 "name": "raid_bdev1", 00:10:14.957 "aliases": [ 00:10:14.957 "5d9eb05c-74d8-4881-aa83-5812f475dc5e" 00:10:14.957 ], 00:10:14.957 "product_name": "Raid Volume", 00:10:14.957 "block_size": 512, 00:10:14.957 "num_blocks": 190464, 00:10:14.957 "uuid": "5d9eb05c-74d8-4881-aa83-5812f475dc5e", 00:10:14.957 "assigned_rate_limits": { 00:10:14.957 "rw_ios_per_sec": 0, 00:10:14.957 "rw_mbytes_per_sec": 0, 00:10:14.957 "r_mbytes_per_sec": 0, 00:10:14.957 "w_mbytes_per_sec": 0 00:10:14.957 }, 00:10:14.957 "claimed": false, 00:10:14.957 "zoned": false, 00:10:14.957 "supported_io_types": { 00:10:14.957 "read": true, 00:10:14.958 "write": true, 00:10:14.958 "unmap": true, 00:10:14.958 "flush": true, 00:10:14.958 "reset": true, 00:10:14.958 "nvme_admin": false, 00:10:14.958 "nvme_io": false, 00:10:14.958 "nvme_io_md": false, 00:10:14.958 "write_zeroes": true, 00:10:14.958 "zcopy": false, 00:10:14.958 "get_zone_info": false, 00:10:14.958 "zone_management": false, 00:10:14.958 "zone_append": false, 00:10:14.958 "compare": 
false, 00:10:14.958 "compare_and_write": false, 00:10:14.958 "abort": false, 00:10:14.958 "seek_hole": false, 00:10:14.958 "seek_data": false, 00:10:14.958 "copy": false, 00:10:14.958 "nvme_iov_md": false 00:10:14.958 }, 00:10:14.958 "memory_domains": [ 00:10:14.958 { 00:10:14.958 "dma_device_id": "system", 00:10:14.958 "dma_device_type": 1 00:10:14.958 }, 00:10:14.958 { 00:10:14.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.958 "dma_device_type": 2 00:10:14.958 }, 00:10:14.958 { 00:10:14.958 "dma_device_id": "system", 00:10:14.958 "dma_device_type": 1 00:10:14.958 }, 00:10:14.958 { 00:10:14.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.958 "dma_device_type": 2 00:10:14.958 }, 00:10:14.958 { 00:10:14.958 "dma_device_id": "system", 00:10:14.958 "dma_device_type": 1 00:10:14.958 }, 00:10:14.958 { 00:10:14.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.958 "dma_device_type": 2 00:10:14.958 } 00:10:14.958 ], 00:10:14.958 "driver_specific": { 00:10:14.958 "raid": { 00:10:14.958 "uuid": "5d9eb05c-74d8-4881-aa83-5812f475dc5e", 00:10:14.958 "strip_size_kb": 64, 00:10:14.958 "state": "online", 00:10:14.958 "raid_level": "raid0", 00:10:14.958 "superblock": true, 00:10:14.958 "num_base_bdevs": 3, 00:10:14.958 "num_base_bdevs_discovered": 3, 00:10:14.958 "num_base_bdevs_operational": 3, 00:10:14.958 "base_bdevs_list": [ 00:10:14.958 { 00:10:14.958 "name": "pt1", 00:10:14.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.958 "is_configured": true, 00:10:14.958 "data_offset": 2048, 00:10:14.958 "data_size": 63488 00:10:14.958 }, 00:10:14.958 { 00:10:14.958 "name": "pt2", 00:10:14.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.958 "is_configured": true, 00:10:14.958 "data_offset": 2048, 00:10:14.958 "data_size": 63488 00:10:14.958 }, 00:10:14.958 { 00:10:14.958 "name": "pt3", 00:10:14.958 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.958 "is_configured": true, 00:10:14.958 "data_offset": 2048, 00:10:14.958 "data_size": 
63488 00:10:14.958 } 00:10:14.958 ] 00:10:14.958 } 00:10:14.958 } 00:10:14.958 }' 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:14.958 pt2 00:10:14.958 pt3' 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.958 
14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.958 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.217 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.217 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.217 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.217 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.217 14:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:15.217 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.217 14:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.217 [2024-11-20 14:20:53.984530] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5d9eb05c-74d8-4881-aa83-5812f475dc5e 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5d9eb05c-74d8-4881-aa83-5812f475dc5e ']' 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.217 [2024-11-20 14:20:54.040220] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.217 [2024-11-20 14:20:54.040260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.217 [2024-11-20 14:20:54.040406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.217 [2024-11-20 14:20:54.040482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.217 [2024-11-20 14:20:54.040497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
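The empty `raid_bdev=` assignment above is the expected result after `bdev_raid_delete`: `bdev_raid_get_bdevs all` returns an empty JSON array, and `jq -r '.[]'` emits nothing for `[]`, so the `'[' -n '' ']'` guard at bdev_raid.sh@443 fails and no stale raid bdev is reported. A sketch of just that jq behavior, with a literal empty array standing in for the RPC response:

```shell
# After deletion, the get_bdevs RPC yields an empty array; iterating it with
# jq produces the empty string, which the -n test correctly treats as "gone".
raid_bdev=$(echo '[]' | jq -r '.[]')

if [ -n "$raid_bdev" ]; then
  echo "raid bdev still present: $raid_bdev"
else
  echo "raid bdev deleted"
fi
```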
00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.217 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.217 [2024-11-20 14:20:54.184321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:15.217 [2024-11-20 14:20:54.186850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:15.217 [2024-11-20 14:20:54.186924] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:15.217 [2024-11-20 14:20:54.187036] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:15.217 [2024-11-20 14:20:54.187119] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:15.217 [2024-11-20 14:20:54.187154] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:15.218 [2024-11-20 14:20:54.187181] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.218 [2024-11-20 14:20:54.187198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:15.218 request: 00:10:15.218 { 00:10:15.218 "name": "raid_bdev1", 00:10:15.218 "raid_level": "raid0", 00:10:15.218 "base_bdevs": [ 00:10:15.218 "malloc1", 00:10:15.218 "malloc2", 00:10:15.218 "malloc3" 00:10:15.218 ], 00:10:15.218 "strip_size_kb": 64, 00:10:15.218 "superblock": false, 00:10:15.218 "method": "bdev_raid_create", 00:10:15.218 "req_id": 1 00:10:15.218 } 00:10:15.218 Got JSON-RPC error response 00:10:15.218 response: 00:10:15.218 { 00:10:15.218 "code": -17, 00:10:15.218 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:15.218 } 00:10:15.218 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:15.218 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:15.218 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:15.218 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:15.218 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:15.218 14:20:54 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.218 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.218 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.218 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.477 [2024-11-20 14:20:54.248276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:15.477 [2024-11-20 14:20:54.248539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.477 [2024-11-20 14:20:54.248614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:15.477 [2024-11-20 14:20:54.248834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.477 [2024-11-20 14:20:54.251846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.477 [2024-11-20 14:20:54.252013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:15.477 [2024-11-20 14:20:54.252233] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:15.477 [2024-11-20 14:20:54.252423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:10:15.477 pt1 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.477 "name": "raid_bdev1", 00:10:15.477 "uuid": "5d9eb05c-74d8-4881-aa83-5812f475dc5e", 00:10:15.477 
"strip_size_kb": 64, 00:10:15.477 "state": "configuring", 00:10:15.477 "raid_level": "raid0", 00:10:15.477 "superblock": true, 00:10:15.477 "num_base_bdevs": 3, 00:10:15.477 "num_base_bdevs_discovered": 1, 00:10:15.477 "num_base_bdevs_operational": 3, 00:10:15.477 "base_bdevs_list": [ 00:10:15.477 { 00:10:15.477 "name": "pt1", 00:10:15.477 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.477 "is_configured": true, 00:10:15.477 "data_offset": 2048, 00:10:15.477 "data_size": 63488 00:10:15.477 }, 00:10:15.477 { 00:10:15.477 "name": null, 00:10:15.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.477 "is_configured": false, 00:10:15.477 "data_offset": 2048, 00:10:15.477 "data_size": 63488 00:10:15.477 }, 00:10:15.477 { 00:10:15.477 "name": null, 00:10:15.477 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.477 "is_configured": false, 00:10:15.477 "data_offset": 2048, 00:10:15.477 "data_size": 63488 00:10:15.477 } 00:10:15.477 ] 00:10:15.477 }' 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.477 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.046 [2024-11-20 14:20:54.788521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:16.046 [2024-11-20 14:20:54.788636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.046 [2024-11-20 14:20:54.788675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:10:16.046 [2024-11-20 14:20:54.788690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.046 [2024-11-20 14:20:54.789316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.046 [2024-11-20 14:20:54.789356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:16.046 [2024-11-20 14:20:54.789466] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:16.046 [2024-11-20 14:20:54.789519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:16.046 pt2 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.046 [2024-11-20 14:20:54.796519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.046 14:20:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.046 "name": "raid_bdev1", 00:10:16.046 "uuid": "5d9eb05c-74d8-4881-aa83-5812f475dc5e", 00:10:16.046 "strip_size_kb": 64, 00:10:16.046 "state": "configuring", 00:10:16.046 "raid_level": "raid0", 00:10:16.046 "superblock": true, 00:10:16.046 "num_base_bdevs": 3, 00:10:16.046 "num_base_bdevs_discovered": 1, 00:10:16.046 "num_base_bdevs_operational": 3, 00:10:16.046 "base_bdevs_list": [ 00:10:16.046 { 00:10:16.046 "name": "pt1", 00:10:16.046 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.046 "is_configured": true, 00:10:16.046 "data_offset": 2048, 00:10:16.046 "data_size": 63488 00:10:16.046 }, 00:10:16.046 { 00:10:16.046 "name": null, 00:10:16.046 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.046 "is_configured": false, 00:10:16.046 "data_offset": 0, 00:10:16.046 "data_size": 63488 00:10:16.046 }, 00:10:16.046 { 00:10:16.046 "name": null, 00:10:16.046 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.046 
"is_configured": false, 00:10:16.046 "data_offset": 2048, 00:10:16.046 "data_size": 63488 00:10:16.046 } 00:10:16.046 ] 00:10:16.046 }' 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.046 14:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.616 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:16.616 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:16.616 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.617 [2024-11-20 14:20:55.340683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:16.617 [2024-11-20 14:20:55.340768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.617 [2024-11-20 14:20:55.340797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:16.617 [2024-11-20 14:20:55.340815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.617 [2024-11-20 14:20:55.341403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.617 [2024-11-20 14:20:55.341435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:16.617 [2024-11-20 14:20:55.341531] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:16.617 [2024-11-20 14:20:55.341567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:16.617 pt2 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
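The `verify_raid_bdev_state` helper seen above (bdev_raid.sh@113) selects the `raid_bdev1` entry out of `bdev_raid_get_bdevs all` and compares its fields against the expected values. A minimal sketch of that selection, using a hand-trimmed hypothetical stand-in for the RPC response:

```shell
# Trimmed stand-in for the bdev_raid_get_bdevs output; only the fields the
# state check actually reads are kept.
all_bdevs='[{"name":"raid_bdev1","state":"configuring",
  "raid_level":"raid0","strip_size_kb":64,"num_base_bdevs_operational":3}]'

# Same filter as bdev_raid.sh@113: pick the entry for the bdev under test.
raid_bdev_info=$(echo "$all_bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')

# Individual fields are then extracted and compared to the expected state.
state=$(echo "$raid_bdev_info" | jq -r '.state')
level=$(echo "$raid_bdev_info" | jq -r '.raid_level')

[ "$state" = "configuring" ] && [ "$level" = "raid0" ] && echo "state OK"
```

In the log, the same check runs first with `expected_state=configuring` (only pt1 attached) and later with `expected_state=online` once all three base bdevs are claimed.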
00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.617 [2024-11-20 14:20:55.348651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:16.617 [2024-11-20 14:20:55.348855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.617 [2024-11-20 14:20:55.348887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:16.617 [2024-11-20 14:20:55.348905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.617 [2024-11-20 14:20:55.349356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.617 [2024-11-20 14:20:55.349403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:16.617 [2024-11-20 14:20:55.349480] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:16.617 [2024-11-20 14:20:55.349513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:16.617 [2024-11-20 14:20:55.349657] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:16.617 [2024-11-20 14:20:55.349678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:16.617 [2024-11-20 14:20:55.349978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:16.617 [2024-11-20 14:20:55.350195] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:16.617 [2024-11-20 14:20:55.350210] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:16.617 [2024-11-20 14:20:55.350373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.617 pt3 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.617 "name": "raid_bdev1", 00:10:16.617 "uuid": "5d9eb05c-74d8-4881-aa83-5812f475dc5e", 00:10:16.617 "strip_size_kb": 64, 00:10:16.617 "state": "online", 00:10:16.617 "raid_level": "raid0", 00:10:16.617 "superblock": true, 00:10:16.617 "num_base_bdevs": 3, 00:10:16.617 "num_base_bdevs_discovered": 3, 00:10:16.617 "num_base_bdevs_operational": 3, 00:10:16.617 "base_bdevs_list": [ 00:10:16.617 { 00:10:16.617 "name": "pt1", 00:10:16.617 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.617 "is_configured": true, 00:10:16.617 "data_offset": 2048, 00:10:16.617 "data_size": 63488 00:10:16.617 }, 00:10:16.617 { 00:10:16.617 "name": "pt2", 00:10:16.617 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.617 "is_configured": true, 00:10:16.617 "data_offset": 2048, 00:10:16.617 "data_size": 63488 00:10:16.617 }, 00:10:16.617 { 00:10:16.617 "name": "pt3", 00:10:16.617 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.617 "is_configured": true, 00:10:16.617 "data_offset": 2048, 00:10:16.617 "data_size": 63488 00:10:16.617 } 00:10:16.617 ] 00:10:16.617 }' 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.617 14:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.185 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:17.185 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:17.185 14:20:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.185 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.185 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.185 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.185 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.185 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.185 14:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.185 14:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.185 [2024-11-20 14:20:55.889222] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.185 14:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.185 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.185 "name": "raid_bdev1", 00:10:17.185 "aliases": [ 00:10:17.185 "5d9eb05c-74d8-4881-aa83-5812f475dc5e" 00:10:17.185 ], 00:10:17.185 "product_name": "Raid Volume", 00:10:17.185 "block_size": 512, 00:10:17.185 "num_blocks": 190464, 00:10:17.185 "uuid": "5d9eb05c-74d8-4881-aa83-5812f475dc5e", 00:10:17.185 "assigned_rate_limits": { 00:10:17.185 "rw_ios_per_sec": 0, 00:10:17.185 "rw_mbytes_per_sec": 0, 00:10:17.185 "r_mbytes_per_sec": 0, 00:10:17.185 "w_mbytes_per_sec": 0 00:10:17.185 }, 00:10:17.185 "claimed": false, 00:10:17.185 "zoned": false, 00:10:17.185 "supported_io_types": { 00:10:17.185 "read": true, 00:10:17.185 "write": true, 00:10:17.185 "unmap": true, 00:10:17.185 "flush": true, 00:10:17.185 "reset": true, 00:10:17.185 "nvme_admin": false, 00:10:17.185 "nvme_io": false, 00:10:17.185 "nvme_io_md": false, 00:10:17.185 
"write_zeroes": true, 00:10:17.185 "zcopy": false, 00:10:17.185 "get_zone_info": false, 00:10:17.185 "zone_management": false, 00:10:17.185 "zone_append": false, 00:10:17.185 "compare": false, 00:10:17.185 "compare_and_write": false, 00:10:17.185 "abort": false, 00:10:17.185 "seek_hole": false, 00:10:17.185 "seek_data": false, 00:10:17.185 "copy": false, 00:10:17.185 "nvme_iov_md": false 00:10:17.185 }, 00:10:17.185 "memory_domains": [ 00:10:17.185 { 00:10:17.185 "dma_device_id": "system", 00:10:17.185 "dma_device_type": 1 00:10:17.185 }, 00:10:17.185 { 00:10:17.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.185 "dma_device_type": 2 00:10:17.185 }, 00:10:17.185 { 00:10:17.185 "dma_device_id": "system", 00:10:17.185 "dma_device_type": 1 00:10:17.185 }, 00:10:17.185 { 00:10:17.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.185 "dma_device_type": 2 00:10:17.185 }, 00:10:17.185 { 00:10:17.185 "dma_device_id": "system", 00:10:17.185 "dma_device_type": 1 00:10:17.185 }, 00:10:17.185 { 00:10:17.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.185 "dma_device_type": 2 00:10:17.185 } 00:10:17.185 ], 00:10:17.185 "driver_specific": { 00:10:17.185 "raid": { 00:10:17.185 "uuid": "5d9eb05c-74d8-4881-aa83-5812f475dc5e", 00:10:17.185 "strip_size_kb": 64, 00:10:17.185 "state": "online", 00:10:17.185 "raid_level": "raid0", 00:10:17.185 "superblock": true, 00:10:17.185 "num_base_bdevs": 3, 00:10:17.185 "num_base_bdevs_discovered": 3, 00:10:17.185 "num_base_bdevs_operational": 3, 00:10:17.185 "base_bdevs_list": [ 00:10:17.185 { 00:10:17.185 "name": "pt1", 00:10:17.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.185 "is_configured": true, 00:10:17.185 "data_offset": 2048, 00:10:17.185 "data_size": 63488 00:10:17.185 }, 00:10:17.185 { 00:10:17.185 "name": "pt2", 00:10:17.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.185 "is_configured": true, 00:10:17.185 "data_offset": 2048, 00:10:17.185 "data_size": 63488 00:10:17.185 }, 00:10:17.185 
{ 00:10:17.185 "name": "pt3", 00:10:17.185 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.185 "is_configured": true, 00:10:17.185 "data_offset": 2048, 00:10:17.185 "data_size": 63488 00:10:17.185 } 00:10:17.185 ] 00:10:17.185 } 00:10:17.185 } 00:10:17.185 }' 00:10:17.185 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.185 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:17.185 pt2 00:10:17.185 pt3' 00:10:17.185 14:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.185 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.185 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.185 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:17.185 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.185 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.185 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.185 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.185 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.185 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.185 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.185 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:17.185 14:20:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.185 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.185 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.185 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.185 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.185 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.185 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.444 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:17.444 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.444 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.444 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.444 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.444 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.444 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.444 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.444 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.444 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.444 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:17.444 
[2024-11-20 14:20:56.217246] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.444 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.444 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5d9eb05c-74d8-4881-aa83-5812f475dc5e '!=' 5d9eb05c-74d8-4881-aa83-5812f475dc5e ']' 00:10:17.444 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:17.444 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:17.444 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:17.445 14:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65087 00:10:17.445 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65087 ']' 00:10:17.445 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65087 00:10:17.445 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:17.445 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.445 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65087 00:10:17.445 killing process with pid 65087 00:10:17.445 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.445 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.445 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65087' 00:10:17.445 14:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65087 00:10:17.445 [2024-11-20 14:20:56.307007] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.445 14:20:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65087 00:10:17.445 [2024-11-20 14:20:56.307167] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.445 [2024-11-20 14:20:56.307248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.445 [2024-11-20 14:20:56.307284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:17.703 [2024-11-20 14:20:56.568344] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:18.638 ************************************ 00:10:18.638 END TEST raid_superblock_test 00:10:18.638 ************************************ 00:10:18.638 14:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:18.638 00:10:18.638 real 0m5.724s 00:10:18.638 user 0m8.646s 00:10:18.638 sys 0m0.864s 00:10:18.638 14:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.638 14:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.897 14:20:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:18.897 14:20:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:18.897 14:20:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.897 14:20:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:18.897 ************************************ 00:10:18.897 START TEST raid_read_error_test 00:10:18.897 ************************************ 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:18.897 14:20:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9SRUPsyTuE 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65345 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65345 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65345 ']' 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.897 14:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.897 [2024-11-20 14:20:57.756974] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:10:18.897 [2024-11-20 14:20:57.757458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65345 ] 00:10:19.156 [2024-11-20 14:20:57.941501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.156 [2024-11-20 14:20:58.067626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.419 [2024-11-20 14:20:58.265111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.419 [2024-11-20 14:20:58.265177] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.986 BaseBdev1_malloc 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.986 true 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.986 [2024-11-20 14:20:58.804487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:19.986 [2024-11-20 14:20:58.804554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.986 [2024-11-20 14:20:58.804584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:19.986 [2024-11-20 14:20:58.804601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.986 [2024-11-20 14:20:58.807313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.986 [2024-11-20 14:20:58.807364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:19.986 BaseBdev1 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.986 BaseBdev2_malloc 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.986 true 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.986 [2024-11-20 14:20:58.863768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:19.986 [2024-11-20 14:20:58.863834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.986 [2024-11-20 14:20:58.863859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:19.986 [2024-11-20 14:20:58.863876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.986 [2024-11-20 14:20:58.866601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.986 [2024-11-20 14:20:58.866797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:19.986 BaseBdev2 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.986 BaseBdev3_malloc 00:10:19.986 14:20:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.986 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.986 true 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.987 [2024-11-20 14:20:58.928524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:19.987 [2024-11-20 14:20:58.928747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.987 [2024-11-20 14:20:58.928785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:19.987 [2024-11-20 14:20:58.928804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.987 [2024-11-20 14:20:58.931571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.987 [2024-11-20 14:20:58.931620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:19.987 BaseBdev3 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.987 [2024-11-20 14:20:58.936620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.987 [2024-11-20 14:20:58.939159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.987 [2024-11-20 14:20:58.939386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.987 [2024-11-20 14:20:58.939764] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:19.987 [2024-11-20 14:20:58.939892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:19.987 [2024-11-20 14:20:58.940325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:19.987 [2024-11-20 14:20:58.940661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:19.987 [2024-11-20 14:20:58.940790] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:19.987 [2024-11-20 14:20:58.941163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.987 14:20:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.987 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.246 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.246 "name": "raid_bdev1", 00:10:20.246 "uuid": "7e73ef18-a1f6-4999-8abf-ff75b21d0ff6", 00:10:20.246 "strip_size_kb": 64, 00:10:20.246 "state": "online", 00:10:20.246 "raid_level": "raid0", 00:10:20.246 "superblock": true, 00:10:20.246 "num_base_bdevs": 3, 00:10:20.246 "num_base_bdevs_discovered": 3, 00:10:20.246 "num_base_bdevs_operational": 3, 00:10:20.246 "base_bdevs_list": [ 00:10:20.246 { 00:10:20.246 "name": "BaseBdev1", 00:10:20.246 "uuid": "a984eb83-ad65-50dd-b222-93c84fdbba17", 00:10:20.246 "is_configured": true, 00:10:20.246 "data_offset": 2048, 00:10:20.246 "data_size": 63488 00:10:20.246 }, 00:10:20.246 { 00:10:20.246 "name": "BaseBdev2", 00:10:20.246 "uuid": "9ec8c126-2fbd-51ae-b8ac-bb424e0ebc6a", 00:10:20.246 "is_configured": true, 00:10:20.246 "data_offset": 2048, 00:10:20.246 "data_size": 63488 
00:10:20.246 }, 00:10:20.246 { 00:10:20.246 "name": "BaseBdev3", 00:10:20.246 "uuid": "cd68bb84-c114-595b-8bfa-4ad97c17e16f", 00:10:20.246 "is_configured": true, 00:10:20.246 "data_offset": 2048, 00:10:20.246 "data_size": 63488 00:10:20.246 } 00:10:20.246 ] 00:10:20.246 }' 00:10:20.246 14:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.246 14:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.812 14:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:20.812 14:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:20.812 [2024-11-20 14:20:59.594663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.744 "name": "raid_bdev1", 00:10:21.744 "uuid": "7e73ef18-a1f6-4999-8abf-ff75b21d0ff6", 00:10:21.744 "strip_size_kb": 64, 00:10:21.744 "state": "online", 00:10:21.744 "raid_level": "raid0", 00:10:21.744 "superblock": true, 00:10:21.744 "num_base_bdevs": 3, 00:10:21.744 "num_base_bdevs_discovered": 3, 00:10:21.744 "num_base_bdevs_operational": 3, 00:10:21.744 "base_bdevs_list": [ 00:10:21.744 { 00:10:21.744 "name": "BaseBdev1", 00:10:21.744 "uuid": "a984eb83-ad65-50dd-b222-93c84fdbba17", 00:10:21.744 "is_configured": true, 00:10:21.744 "data_offset": 2048, 00:10:21.744 "data_size": 63488 
00:10:21.744 }, 00:10:21.744 { 00:10:21.744 "name": "BaseBdev2", 00:10:21.744 "uuid": "9ec8c126-2fbd-51ae-b8ac-bb424e0ebc6a", 00:10:21.744 "is_configured": true, 00:10:21.744 "data_offset": 2048, 00:10:21.744 "data_size": 63488 00:10:21.744 }, 00:10:21.744 { 00:10:21.744 "name": "BaseBdev3", 00:10:21.744 "uuid": "cd68bb84-c114-595b-8bfa-4ad97c17e16f", 00:10:21.744 "is_configured": true, 00:10:21.744 "data_offset": 2048, 00:10:21.744 "data_size": 63488 00:10:21.744 } 00:10:21.744 ] 00:10:21.744 }' 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.744 14:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.308 14:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:22.308 14:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.308 14:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.308 [2024-11-20 14:21:01.026067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.308 [2024-11-20 14:21:01.026100] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.308 [2024-11-20 14:21:01.029741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.308 { 00:10:22.308 "results": [ 00:10:22.308 { 00:10:22.308 "job": "raid_bdev1", 00:10:22.308 "core_mask": "0x1", 00:10:22.308 "workload": "randrw", 00:10:22.308 "percentage": 50, 00:10:22.308 "status": "finished", 00:10:22.308 "queue_depth": 1, 00:10:22.308 "io_size": 131072, 00:10:22.308 "runtime": 1.429009, 00:10:22.308 "iops": 10701.122246255973, 00:10:22.308 "mibps": 1337.6402807819966, 00:10:22.308 "io_failed": 1, 00:10:22.308 "io_timeout": 0, 00:10:22.308 "avg_latency_us": 130.16322762047997, 00:10:22.308 "min_latency_us": 26.763636363636362, 00:10:22.308 "max_latency_us": 1832.0290909090909 
00:10:22.308 } 00:10:22.308 ], 00:10:22.308 "core_count": 1 00:10:22.308 } 00:10:22.308 [2024-11-20 14:21:01.029968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.308 [2024-11-20 14:21:01.030055] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.308 [2024-11-20 14:21:01.030072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:22.308 14:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.308 14:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65345 00:10:22.308 14:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65345 ']' 00:10:22.308 14:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65345 00:10:22.308 14:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:22.308 14:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.308 14:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65345 00:10:22.308 killing process with pid 65345 00:10:22.308 14:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.308 14:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.308 14:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65345' 00:10:22.308 14:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65345 00:10:22.308 [2024-11-20 14:21:01.062805] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.308 14:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65345 00:10:22.308 [2024-11-20 
14:21:01.274147] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.715 14:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9SRUPsyTuE 00:10:23.715 14:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:23.715 14:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:23.715 14:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:23.715 14:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:23.715 14:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:23.715 14:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:23.715 14:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:23.715 00:10:23.715 real 0m4.728s 00:10:23.715 user 0m5.896s 00:10:23.715 sys 0m0.567s 00:10:23.715 14:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.715 ************************************ 00:10:23.715 END TEST raid_read_error_test 00:10:23.715 ************************************ 00:10:23.715 14:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.715 14:21:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:23.715 14:21:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:23.715 14:21:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.715 14:21:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.715 ************************************ 00:10:23.715 START TEST raid_write_error_test 00:10:23.715 ************************************ 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:10:23.715 14:21:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:23.715 14:21:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8E0muTEDEY 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65491 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65491 00:10:23.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65491 ']' 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.715 14:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.715 [2024-11-20 14:21:02.557858] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:10:23.715 [2024-11-20 14:21:02.558684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65491 ] 00:10:23.974 [2024-11-20 14:21:02.748448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.974 [2024-11-20 14:21:02.873112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.233 [2024-11-20 14:21:03.073658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.233 [2024-11-20 14:21:03.073700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.802 BaseBdev1_malloc 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.802 true 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.802 [2024-11-20 14:21:03.530403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:24.802 [2024-11-20 14:21:03.530470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.802 [2024-11-20 14:21:03.530500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:24.802 [2024-11-20 14:21:03.530517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.802 [2024-11-20 14:21:03.533218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.802 [2024-11-20 14:21:03.533268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:24.802 BaseBdev1 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:24.802 BaseBdev2_malloc 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.802 true 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.802 [2024-11-20 14:21:03.590199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:24.802 [2024-11-20 14:21:03.590268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.802 [2024-11-20 14:21:03.590294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:24.802 [2024-11-20 14:21:03.590311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.802 [2024-11-20 14:21:03.593032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.802 [2024-11-20 14:21:03.593080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:24.802 BaseBdev2 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.802 14:21:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.802 BaseBdev3_malloc 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.802 true 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.802 [2024-11-20 14:21:03.654130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:24.802 [2024-11-20 14:21:03.654197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.802 [2024-11-20 14:21:03.654225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:24.802 [2024-11-20 14:21:03.654243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.802 [2024-11-20 14:21:03.657003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.802 [2024-11-20 14:21:03.657048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:24.802 BaseBdev3 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.802 [2024-11-20 14:21:03.662222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.802 [2024-11-20 14:21:03.664606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.802 [2024-11-20 14:21:03.664841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.802 [2024-11-20 14:21:03.665129] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:24.802 [2024-11-20 14:21:03.665151] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:24.802 [2024-11-20 14:21:03.665452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:24.802 [2024-11-20 14:21:03.665661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:24.802 [2024-11-20 14:21:03.665683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:24.802 [2024-11-20 14:21:03.665859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.802 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.803 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.803 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.803 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.803 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.803 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.803 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.803 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.803 "name": "raid_bdev1", 00:10:24.803 "uuid": "9b62d99e-6b2f-4b78-971b-52b13b27dff5", 00:10:24.803 "strip_size_kb": 64, 00:10:24.803 "state": "online", 00:10:24.803 "raid_level": "raid0", 00:10:24.803 "superblock": true, 00:10:24.803 "num_base_bdevs": 3, 00:10:24.803 "num_base_bdevs_discovered": 3, 00:10:24.803 "num_base_bdevs_operational": 3, 00:10:24.803 "base_bdevs_list": [ 00:10:24.803 { 00:10:24.803 "name": "BaseBdev1", 
00:10:24.803 "uuid": "2ae459dc-9918-5318-bf27-df397fdc98e6", 00:10:24.803 "is_configured": true, 00:10:24.803 "data_offset": 2048, 00:10:24.803 "data_size": 63488 00:10:24.803 }, 00:10:24.803 { 00:10:24.803 "name": "BaseBdev2", 00:10:24.803 "uuid": "baf6e33e-d9cd-5308-8053-dc54b03c5af5", 00:10:24.803 "is_configured": true, 00:10:24.803 "data_offset": 2048, 00:10:24.803 "data_size": 63488 00:10:24.803 }, 00:10:24.803 { 00:10:24.803 "name": "BaseBdev3", 00:10:24.803 "uuid": "8600f251-f7a6-51b4-9455-ac012311caa1", 00:10:24.803 "is_configured": true, 00:10:24.803 "data_offset": 2048, 00:10:24.803 "data_size": 63488 00:10:24.803 } 00:10:24.803 ] 00:10:24.803 }' 00:10:24.803 14:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.803 14:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.371 14:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:25.371 14:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:25.371 [2024-11-20 14:21:04.251737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.308 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.309 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.309 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.309 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.309 "name": "raid_bdev1", 00:10:26.309 "uuid": "9b62d99e-6b2f-4b78-971b-52b13b27dff5", 00:10:26.309 "strip_size_kb": 64, 00:10:26.309 "state": "online", 00:10:26.309 
"raid_level": "raid0", 00:10:26.309 "superblock": true, 00:10:26.309 "num_base_bdevs": 3, 00:10:26.309 "num_base_bdevs_discovered": 3, 00:10:26.309 "num_base_bdevs_operational": 3, 00:10:26.309 "base_bdevs_list": [ 00:10:26.309 { 00:10:26.309 "name": "BaseBdev1", 00:10:26.309 "uuid": "2ae459dc-9918-5318-bf27-df397fdc98e6", 00:10:26.309 "is_configured": true, 00:10:26.309 "data_offset": 2048, 00:10:26.309 "data_size": 63488 00:10:26.309 }, 00:10:26.309 { 00:10:26.309 "name": "BaseBdev2", 00:10:26.309 "uuid": "baf6e33e-d9cd-5308-8053-dc54b03c5af5", 00:10:26.309 "is_configured": true, 00:10:26.309 "data_offset": 2048, 00:10:26.309 "data_size": 63488 00:10:26.309 }, 00:10:26.309 { 00:10:26.309 "name": "BaseBdev3", 00:10:26.309 "uuid": "8600f251-f7a6-51b4-9455-ac012311caa1", 00:10:26.309 "is_configured": true, 00:10:26.309 "data_offset": 2048, 00:10:26.309 "data_size": 63488 00:10:26.309 } 00:10:26.309 ] 00:10:26.309 }' 00:10:26.309 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.309 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.876 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:26.876 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.876 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.876 [2024-11-20 14:21:05.678131] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:26.876 [2024-11-20 14:21:05.678299] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.876 [2024-11-20 14:21:05.681725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.876 [2024-11-20 14:21:05.681910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.876 [2024-11-20 14:21:05.681980] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.876 [2024-11-20 14:21:05.682014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:26.876 { 00:10:26.876 "results": [ 00:10:26.876 { 00:10:26.876 "job": "raid_bdev1", 00:10:26.876 "core_mask": "0x1", 00:10:26.876 "workload": "randrw", 00:10:26.876 "percentage": 50, 00:10:26.876 "status": "finished", 00:10:26.876 "queue_depth": 1, 00:10:26.876 "io_size": 131072, 00:10:26.876 "runtime": 1.42407, 00:10:26.876 "iops": 11146.221744717604, 00:10:26.876 "mibps": 1393.2777180897006, 00:10:26.876 "io_failed": 1, 00:10:26.876 "io_timeout": 0, 00:10:26.876 "avg_latency_us": 124.63710630304558, 00:10:26.876 "min_latency_us": 26.88, 00:10:26.876 "max_latency_us": 1891.6072727272726 00:10:26.876 } 00:10:26.876 ], 00:10:26.876 "core_count": 1 00:10:26.876 } 00:10:26.876 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.876 14:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65491 00:10:26.876 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65491 ']' 00:10:26.876 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65491 00:10:26.876 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:26.876 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.876 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65491 00:10:26.876 killing process with pid 65491 00:10:26.876 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.876 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.876 14:21:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65491' 00:10:26.876 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65491 00:10:26.876 [2024-11-20 14:21:05.717425] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.876 14:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65491 00:10:27.135 [2024-11-20 14:21:05.919783] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.072 14:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8E0muTEDEY 00:10:28.072 14:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:28.072 14:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:28.072 14:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:28.072 14:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:28.072 14:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:28.072 14:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:28.072 14:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:28.072 00:10:28.072 real 0m4.596s 00:10:28.072 user 0m5.645s 00:10:28.072 sys 0m0.546s 00:10:28.072 14:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.072 ************************************ 00:10:28.072 END TEST raid_write_error_test 00:10:28.072 ************************************ 00:10:28.072 14:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.331 14:21:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:28.332 14:21:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:10:28.332 14:21:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:28.332 14:21:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.332 14:21:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.332 ************************************ 00:10:28.332 START TEST raid_state_function_test 00:10:28.332 ************************************ 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:28.332 14:21:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:28.332 Process raid pid: 65635 00:10:28.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65635 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65635' 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65635 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65635 ']' 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.332 14:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.332 [2024-11-20 14:21:07.184449] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:10:28.332 [2024-11-20 14:21:07.184894] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.591 [2024-11-20 14:21:07.370369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.591 [2024-11-20 14:21:07.497639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.849 [2024-11-20 14:21:07.709408] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.849 [2024-11-20 14:21:07.709668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.416 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.416 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:29.416 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:29.416 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.416 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.416 [2024-11-20 14:21:08.181799] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.416 [2024-11-20 14:21:08.182013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.416 [2024-11-20 14:21:08.182040] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:29.416 [2024-11-20 14:21:08.182059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:29.417 [2024-11-20 14:21:08.182069] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:10:29.417 [2024-11-20 14:21:08.182083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.417 14:21:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.417 "name": "Existed_Raid", 00:10:29.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.417 "strip_size_kb": 64, 00:10:29.417 "state": "configuring", 00:10:29.417 "raid_level": "concat", 00:10:29.417 "superblock": false, 00:10:29.417 "num_base_bdevs": 3, 00:10:29.417 "num_base_bdevs_discovered": 0, 00:10:29.417 "num_base_bdevs_operational": 3, 00:10:29.417 "base_bdevs_list": [ 00:10:29.417 { 00:10:29.417 "name": "BaseBdev1", 00:10:29.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.417 "is_configured": false, 00:10:29.417 "data_offset": 0, 00:10:29.417 "data_size": 0 00:10:29.417 }, 00:10:29.417 { 00:10:29.417 "name": "BaseBdev2", 00:10:29.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.417 "is_configured": false, 00:10:29.417 "data_offset": 0, 00:10:29.417 "data_size": 0 00:10:29.417 }, 00:10:29.417 { 00:10:29.417 "name": "BaseBdev3", 00:10:29.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.417 "is_configured": false, 00:10:29.417 "data_offset": 0, 00:10:29.417 "data_size": 0 00:10:29.417 } 00:10:29.417 ] 00:10:29.417 }' 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.417 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.986 [2024-11-20 14:21:08.741891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:29.986 [2024-11-20 14:21:08.742080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.986 [2024-11-20 14:21:08.749890] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.986 [2024-11-20 14:21:08.750063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.986 [2024-11-20 14:21:08.750089] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:29.986 [2024-11-20 14:21:08.750107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:29.986 [2024-11-20 14:21:08.750117] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:29.986 [2024-11-20 14:21:08.750130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.986 [2024-11-20 14:21:08.794261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.986 BaseBdev1 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.986 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.987 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.987 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.987 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.987 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:29.987 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.987 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.987 [ 00:10:29.987 { 00:10:29.987 "name": "BaseBdev1", 00:10:29.987 "aliases": [ 00:10:29.987 "fd0fa478-9c56-46aa-a38d-213f2f13dcbe" 00:10:29.987 ], 00:10:29.987 "product_name": "Malloc disk", 00:10:29.987 "block_size": 512, 00:10:29.987 "num_blocks": 65536, 00:10:29.987 "uuid": "fd0fa478-9c56-46aa-a38d-213f2f13dcbe", 00:10:29.987 "assigned_rate_limits": { 00:10:29.987 "rw_ios_per_sec": 0, 00:10:29.987 "rw_mbytes_per_sec": 0, 00:10:29.987 "r_mbytes_per_sec": 0, 00:10:29.987 "w_mbytes_per_sec": 0 00:10:29.987 }, 
00:10:29.987 "claimed": true, 00:10:29.987 "claim_type": "exclusive_write", 00:10:29.987 "zoned": false, 00:10:29.987 "supported_io_types": { 00:10:29.987 "read": true, 00:10:29.987 "write": true, 00:10:29.987 "unmap": true, 00:10:29.987 "flush": true, 00:10:29.987 "reset": true, 00:10:29.987 "nvme_admin": false, 00:10:29.987 "nvme_io": false, 00:10:29.987 "nvme_io_md": false, 00:10:29.987 "write_zeroes": true, 00:10:29.987 "zcopy": true, 00:10:29.987 "get_zone_info": false, 00:10:29.987 "zone_management": false, 00:10:29.987 "zone_append": false, 00:10:29.987 "compare": false, 00:10:29.987 "compare_and_write": false, 00:10:29.987 "abort": true, 00:10:29.987 "seek_hole": false, 00:10:29.987 "seek_data": false, 00:10:29.987 "copy": true, 00:10:29.987 "nvme_iov_md": false 00:10:29.987 }, 00:10:29.987 "memory_domains": [ 00:10:29.987 { 00:10:29.987 "dma_device_id": "system", 00:10:29.987 "dma_device_type": 1 00:10:29.987 }, 00:10:29.987 { 00:10:29.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.987 "dma_device_type": 2 00:10:29.987 } 00:10:29.987 ], 00:10:29.987 "driver_specific": {} 00:10:29.987 } 00:10:29.987 ] 00:10:29.988 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.988 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:29.988 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:29.988 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.988 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.988 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.988 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.988 14:21:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.988 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.988 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.988 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.988 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.988 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.988 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.988 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.988 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.988 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.988 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.988 "name": "Existed_Raid", 00:10:29.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.988 "strip_size_kb": 64, 00:10:29.988 "state": "configuring", 00:10:29.988 "raid_level": "concat", 00:10:29.988 "superblock": false, 00:10:29.988 "num_base_bdevs": 3, 00:10:29.988 "num_base_bdevs_discovered": 1, 00:10:29.988 "num_base_bdevs_operational": 3, 00:10:29.988 "base_bdevs_list": [ 00:10:29.988 { 00:10:29.988 "name": "BaseBdev1", 00:10:29.988 "uuid": "fd0fa478-9c56-46aa-a38d-213f2f13dcbe", 00:10:29.988 "is_configured": true, 00:10:29.988 "data_offset": 0, 00:10:29.988 "data_size": 65536 00:10:29.988 }, 00:10:29.988 { 00:10:29.988 "name": "BaseBdev2", 00:10:29.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.988 "is_configured": false, 00:10:29.988 
"data_offset": 0, 00:10:29.988 "data_size": 0 00:10:29.988 }, 00:10:29.988 { 00:10:29.989 "name": "BaseBdev3", 00:10:29.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.989 "is_configured": false, 00:10:29.989 "data_offset": 0, 00:10:29.989 "data_size": 0 00:10:29.989 } 00:10:29.989 ] 00:10:29.989 }' 00:10:29.989 14:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.989 14:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.558 [2024-11-20 14:21:09.350458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:30.558 [2024-11-20 14:21:09.350519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.558 [2024-11-20 14:21:09.358503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.558 [2024-11-20 14:21:09.361005] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.558 [2024-11-20 14:21:09.361172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:10:30.558 [2024-11-20 14:21:09.361292] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:30.558 [2024-11-20 14:21:09.361352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.558 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.558 "name": "Existed_Raid", 00:10:30.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.559 "strip_size_kb": 64, 00:10:30.559 "state": "configuring", 00:10:30.559 "raid_level": "concat", 00:10:30.559 "superblock": false, 00:10:30.559 "num_base_bdevs": 3, 00:10:30.559 "num_base_bdevs_discovered": 1, 00:10:30.559 "num_base_bdevs_operational": 3, 00:10:30.559 "base_bdevs_list": [ 00:10:30.559 { 00:10:30.559 "name": "BaseBdev1", 00:10:30.559 "uuid": "fd0fa478-9c56-46aa-a38d-213f2f13dcbe", 00:10:30.559 "is_configured": true, 00:10:30.559 "data_offset": 0, 00:10:30.559 "data_size": 65536 00:10:30.559 }, 00:10:30.559 { 00:10:30.559 "name": "BaseBdev2", 00:10:30.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.559 "is_configured": false, 00:10:30.559 "data_offset": 0, 00:10:30.559 "data_size": 0 00:10:30.559 }, 00:10:30.559 { 00:10:30.559 "name": "BaseBdev3", 00:10:30.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.559 "is_configured": false, 00:10:30.559 "data_offset": 0, 00:10:30.559 "data_size": 0 00:10:30.559 } 00:10:30.559 ] 00:10:30.559 }' 00:10:30.559 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.559 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.127 [2024-11-20 14:21:09.917507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.127 BaseBdev2 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.127 [ 00:10:31.127 { 00:10:31.127 "name": "BaseBdev2", 00:10:31.127 "aliases": [ 00:10:31.127 "58044859-edfe-4f4f-ac8c-8ffb6bba9571" 00:10:31.127 ], 
00:10:31.127 "product_name": "Malloc disk", 00:10:31.127 "block_size": 512, 00:10:31.127 "num_blocks": 65536, 00:10:31.127 "uuid": "58044859-edfe-4f4f-ac8c-8ffb6bba9571", 00:10:31.127 "assigned_rate_limits": { 00:10:31.127 "rw_ios_per_sec": 0, 00:10:31.127 "rw_mbytes_per_sec": 0, 00:10:31.127 "r_mbytes_per_sec": 0, 00:10:31.127 "w_mbytes_per_sec": 0 00:10:31.127 }, 00:10:31.127 "claimed": true, 00:10:31.127 "claim_type": "exclusive_write", 00:10:31.127 "zoned": false, 00:10:31.127 "supported_io_types": { 00:10:31.127 "read": true, 00:10:31.127 "write": true, 00:10:31.127 "unmap": true, 00:10:31.127 "flush": true, 00:10:31.127 "reset": true, 00:10:31.127 "nvme_admin": false, 00:10:31.127 "nvme_io": false, 00:10:31.127 "nvme_io_md": false, 00:10:31.127 "write_zeroes": true, 00:10:31.127 "zcopy": true, 00:10:31.127 "get_zone_info": false, 00:10:31.127 "zone_management": false, 00:10:31.127 "zone_append": false, 00:10:31.127 "compare": false, 00:10:31.127 "compare_and_write": false, 00:10:31.127 "abort": true, 00:10:31.127 "seek_hole": false, 00:10:31.127 "seek_data": false, 00:10:31.127 "copy": true, 00:10:31.127 "nvme_iov_md": false 00:10:31.127 }, 00:10:31.127 "memory_domains": [ 00:10:31.127 { 00:10:31.127 "dma_device_id": "system", 00:10:31.127 "dma_device_type": 1 00:10:31.127 }, 00:10:31.127 { 00:10:31.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.127 "dma_device_type": 2 00:10:31.127 } 00:10:31.127 ], 00:10:31.127 "driver_specific": {} 00:10:31.127 } 00:10:31.127 ] 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:31.127 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.128 14:21:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:31.128 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.128 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.128 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.128 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.128 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.128 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.128 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.128 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.128 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.128 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.128 14:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.128 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.128 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.128 14:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.128 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.128 "name": "Existed_Raid", 00:10:31.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.128 "strip_size_kb": 64, 00:10:31.128 "state": "configuring", 00:10:31.128 "raid_level": "concat", 00:10:31.128 
"superblock": false, 00:10:31.128 "num_base_bdevs": 3, 00:10:31.128 "num_base_bdevs_discovered": 2, 00:10:31.128 "num_base_bdevs_operational": 3, 00:10:31.128 "base_bdevs_list": [ 00:10:31.128 { 00:10:31.128 "name": "BaseBdev1", 00:10:31.128 "uuid": "fd0fa478-9c56-46aa-a38d-213f2f13dcbe", 00:10:31.128 "is_configured": true, 00:10:31.128 "data_offset": 0, 00:10:31.128 "data_size": 65536 00:10:31.128 }, 00:10:31.128 { 00:10:31.128 "name": "BaseBdev2", 00:10:31.128 "uuid": "58044859-edfe-4f4f-ac8c-8ffb6bba9571", 00:10:31.128 "is_configured": true, 00:10:31.128 "data_offset": 0, 00:10:31.128 "data_size": 65536 00:10:31.128 }, 00:10:31.128 { 00:10:31.128 "name": "BaseBdev3", 00:10:31.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.128 "is_configured": false, 00:10:31.128 "data_offset": 0, 00:10:31.128 "data_size": 0 00:10:31.128 } 00:10:31.128 ] 00:10:31.128 }' 00:10:31.128 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.128 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.695 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:31.695 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.695 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.695 [2024-11-20 14:21:10.521533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.695 [2024-11-20 14:21:10.521582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:31.695 [2024-11-20 14:21:10.521602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:31.695 [2024-11-20 14:21:10.521944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:31.695 [2024-11-20 14:21:10.522217] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:31.695 [2024-11-20 14:21:10.522242] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:31.695 [2024-11-20 14:21:10.522533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.695 BaseBdev3 00:10:31.695 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.695 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:31.695 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:31.695 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:31.695 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:31.695 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:31.695 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:31.695 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:31.695 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.695 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.695 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.695 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:31.695 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.695 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.695 [ 00:10:31.695 { 00:10:31.695 
"name": "BaseBdev3", 00:10:31.695 "aliases": [ 00:10:31.695 "266fc3f5-51d3-4888-abf3-7c5bc27511d4" 00:10:31.695 ], 00:10:31.695 "product_name": "Malloc disk", 00:10:31.695 "block_size": 512, 00:10:31.695 "num_blocks": 65536, 00:10:31.695 "uuid": "266fc3f5-51d3-4888-abf3-7c5bc27511d4", 00:10:31.695 "assigned_rate_limits": { 00:10:31.695 "rw_ios_per_sec": 0, 00:10:31.695 "rw_mbytes_per_sec": 0, 00:10:31.695 "r_mbytes_per_sec": 0, 00:10:31.695 "w_mbytes_per_sec": 0 00:10:31.695 }, 00:10:31.695 "claimed": true, 00:10:31.695 "claim_type": "exclusive_write", 00:10:31.695 "zoned": false, 00:10:31.695 "supported_io_types": { 00:10:31.695 "read": true, 00:10:31.695 "write": true, 00:10:31.695 "unmap": true, 00:10:31.695 "flush": true, 00:10:31.696 "reset": true, 00:10:31.696 "nvme_admin": false, 00:10:31.696 "nvme_io": false, 00:10:31.696 "nvme_io_md": false, 00:10:31.696 "write_zeroes": true, 00:10:31.696 "zcopy": true, 00:10:31.696 "get_zone_info": false, 00:10:31.696 "zone_management": false, 00:10:31.696 "zone_append": false, 00:10:31.696 "compare": false, 00:10:31.696 "compare_and_write": false, 00:10:31.696 "abort": true, 00:10:31.696 "seek_hole": false, 00:10:31.696 "seek_data": false, 00:10:31.696 "copy": true, 00:10:31.696 "nvme_iov_md": false 00:10:31.696 }, 00:10:31.696 "memory_domains": [ 00:10:31.696 { 00:10:31.696 "dma_device_id": "system", 00:10:31.696 "dma_device_type": 1 00:10:31.696 }, 00:10:31.696 { 00:10:31.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.696 "dma_device_type": 2 00:10:31.696 } 00:10:31.696 ], 00:10:31.696 "driver_specific": {} 00:10:31.696 } 00:10:31.696 ] 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.696 "name": "Existed_Raid", 00:10:31.696 "uuid": "7d73fd53-f8bd-496e-904c-2356266c1b65", 00:10:31.696 
"strip_size_kb": 64, 00:10:31.696 "state": "online", 00:10:31.696 "raid_level": "concat", 00:10:31.696 "superblock": false, 00:10:31.696 "num_base_bdevs": 3, 00:10:31.696 "num_base_bdevs_discovered": 3, 00:10:31.696 "num_base_bdevs_operational": 3, 00:10:31.696 "base_bdevs_list": [ 00:10:31.696 { 00:10:31.696 "name": "BaseBdev1", 00:10:31.696 "uuid": "fd0fa478-9c56-46aa-a38d-213f2f13dcbe", 00:10:31.696 "is_configured": true, 00:10:31.696 "data_offset": 0, 00:10:31.696 "data_size": 65536 00:10:31.696 }, 00:10:31.696 { 00:10:31.696 "name": "BaseBdev2", 00:10:31.696 "uuid": "58044859-edfe-4f4f-ac8c-8ffb6bba9571", 00:10:31.696 "is_configured": true, 00:10:31.696 "data_offset": 0, 00:10:31.696 "data_size": 65536 00:10:31.696 }, 00:10:31.696 { 00:10:31.696 "name": "BaseBdev3", 00:10:31.696 "uuid": "266fc3f5-51d3-4888-abf3-7c5bc27511d4", 00:10:31.696 "is_configured": true, 00:10:31.696 "data_offset": 0, 00:10:31.696 "data_size": 65536 00:10:31.696 } 00:10:31.696 ] 00:10:31.696 }' 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.696 14:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.264 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:32.264 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:32.264 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:32.264 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:32.264 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:32.264 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:32.264 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:32.264 14:21:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:32.264 14:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.264 14:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.264 [2024-11-20 14:21:11.118134] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.264 14:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.264 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:32.264 "name": "Existed_Raid", 00:10:32.264 "aliases": [ 00:10:32.264 "7d73fd53-f8bd-496e-904c-2356266c1b65" 00:10:32.264 ], 00:10:32.264 "product_name": "Raid Volume", 00:10:32.264 "block_size": 512, 00:10:32.264 "num_blocks": 196608, 00:10:32.264 "uuid": "7d73fd53-f8bd-496e-904c-2356266c1b65", 00:10:32.264 "assigned_rate_limits": { 00:10:32.264 "rw_ios_per_sec": 0, 00:10:32.264 "rw_mbytes_per_sec": 0, 00:10:32.264 "r_mbytes_per_sec": 0, 00:10:32.264 "w_mbytes_per_sec": 0 00:10:32.264 }, 00:10:32.264 "claimed": false, 00:10:32.264 "zoned": false, 00:10:32.264 "supported_io_types": { 00:10:32.264 "read": true, 00:10:32.264 "write": true, 00:10:32.264 "unmap": true, 00:10:32.264 "flush": true, 00:10:32.264 "reset": true, 00:10:32.264 "nvme_admin": false, 00:10:32.264 "nvme_io": false, 00:10:32.264 "nvme_io_md": false, 00:10:32.264 "write_zeroes": true, 00:10:32.264 "zcopy": false, 00:10:32.264 "get_zone_info": false, 00:10:32.264 "zone_management": false, 00:10:32.264 "zone_append": false, 00:10:32.264 "compare": false, 00:10:32.264 "compare_and_write": false, 00:10:32.264 "abort": false, 00:10:32.264 "seek_hole": false, 00:10:32.264 "seek_data": false, 00:10:32.264 "copy": false, 00:10:32.264 "nvme_iov_md": false 00:10:32.264 }, 00:10:32.264 "memory_domains": [ 00:10:32.264 { 00:10:32.264 "dma_device_id": "system", 
00:10:32.264 "dma_device_type": 1 00:10:32.264 }, 00:10:32.264 { 00:10:32.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.264 "dma_device_type": 2 00:10:32.264 }, 00:10:32.264 { 00:10:32.264 "dma_device_id": "system", 00:10:32.264 "dma_device_type": 1 00:10:32.264 }, 00:10:32.264 { 00:10:32.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.264 "dma_device_type": 2 00:10:32.264 }, 00:10:32.264 { 00:10:32.264 "dma_device_id": "system", 00:10:32.264 "dma_device_type": 1 00:10:32.264 }, 00:10:32.264 { 00:10:32.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.264 "dma_device_type": 2 00:10:32.264 } 00:10:32.264 ], 00:10:32.264 "driver_specific": { 00:10:32.264 "raid": { 00:10:32.264 "uuid": "7d73fd53-f8bd-496e-904c-2356266c1b65", 00:10:32.264 "strip_size_kb": 64, 00:10:32.264 "state": "online", 00:10:32.264 "raid_level": "concat", 00:10:32.264 "superblock": false, 00:10:32.264 "num_base_bdevs": 3, 00:10:32.264 "num_base_bdevs_discovered": 3, 00:10:32.264 "num_base_bdevs_operational": 3, 00:10:32.264 "base_bdevs_list": [ 00:10:32.264 { 00:10:32.264 "name": "BaseBdev1", 00:10:32.264 "uuid": "fd0fa478-9c56-46aa-a38d-213f2f13dcbe", 00:10:32.264 "is_configured": true, 00:10:32.264 "data_offset": 0, 00:10:32.264 "data_size": 65536 00:10:32.264 }, 00:10:32.264 { 00:10:32.264 "name": "BaseBdev2", 00:10:32.264 "uuid": "58044859-edfe-4f4f-ac8c-8ffb6bba9571", 00:10:32.264 "is_configured": true, 00:10:32.264 "data_offset": 0, 00:10:32.265 "data_size": 65536 00:10:32.265 }, 00:10:32.265 { 00:10:32.265 "name": "BaseBdev3", 00:10:32.265 "uuid": "266fc3f5-51d3-4888-abf3-7c5bc27511d4", 00:10:32.265 "is_configured": true, 00:10:32.265 "data_offset": 0, 00:10:32.265 "data_size": 65536 00:10:32.265 } 00:10:32.265 ] 00:10:32.265 } 00:10:32.265 } 00:10:32.265 }' 00:10:32.265 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:32.265 14:21:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:32.265 BaseBdev2 00:10:32.265 BaseBdev3' 00:10:32.265 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.574 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:32.574 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.574 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.574 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:32.574 14:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.574 14:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.574 14:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.575 14:21:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.575 [2024-11-20 14:21:11.425825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.575 [2024-11-20 14:21:11.425975] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.575 [2024-11-20 14:21:11.426081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.575 14:21:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.575 14:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.833 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.833 "name": "Existed_Raid", 00:10:32.833 "uuid": "7d73fd53-f8bd-496e-904c-2356266c1b65", 00:10:32.833 "strip_size_kb": 64, 00:10:32.833 "state": "offline", 00:10:32.833 "raid_level": "concat", 00:10:32.833 "superblock": false, 00:10:32.833 "num_base_bdevs": 3, 00:10:32.833 "num_base_bdevs_discovered": 2, 00:10:32.833 "num_base_bdevs_operational": 2, 00:10:32.833 "base_bdevs_list": [ 00:10:32.833 { 00:10:32.833 "name": null, 00:10:32.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.833 "is_configured": false, 00:10:32.833 "data_offset": 0, 00:10:32.833 "data_size": 65536 00:10:32.833 }, 00:10:32.833 { 00:10:32.833 "name": "BaseBdev2", 00:10:32.833 "uuid": "58044859-edfe-4f4f-ac8c-8ffb6bba9571", 00:10:32.833 "is_configured": true, 00:10:32.833 "data_offset": 0, 00:10:32.833 "data_size": 65536 00:10:32.833 }, 00:10:32.833 { 00:10:32.833 "name": "BaseBdev3", 00:10:32.833 "uuid": "266fc3f5-51d3-4888-abf3-7c5bc27511d4", 00:10:32.833 "is_configured": true, 00:10:32.833 "data_offset": 0, 00:10:32.833 "data_size": 65536 00:10:32.833 } 00:10:32.833 ] 00:10:32.833 }' 00:10:32.833 14:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.833 14:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.093 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:33.093 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.093 14:21:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.093 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:33.093 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.093 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.093 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.353 [2024-11-20 14:21:12.090507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.353 [2024-11-20 14:21:12.230530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:33.353 [2024-11-20 14:21:12.230590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.353 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:33.613 14:21:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.613 BaseBdev2 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs 
-b BaseBdev2 -t 2000 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.613 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.613 [ 00:10:33.613 { 00:10:33.613 "name": "BaseBdev2", 00:10:33.613 "aliases": [ 00:10:33.613 "cf28a5c2-15ed-4da2-a43b-fffdd8b70ea8" 00:10:33.613 ], 00:10:33.613 "product_name": "Malloc disk", 00:10:33.613 "block_size": 512, 00:10:33.613 "num_blocks": 65536, 00:10:33.613 "uuid": "cf28a5c2-15ed-4da2-a43b-fffdd8b70ea8", 00:10:33.613 "assigned_rate_limits": { 00:10:33.613 "rw_ios_per_sec": 0, 00:10:33.613 "rw_mbytes_per_sec": 0, 00:10:33.613 "r_mbytes_per_sec": 0, 00:10:33.613 "w_mbytes_per_sec": 0 00:10:33.613 }, 00:10:33.613 "claimed": false, 00:10:33.613 "zoned": false, 00:10:33.613 "supported_io_types": { 00:10:33.613 "read": true, 00:10:33.613 "write": true, 00:10:33.613 "unmap": true, 00:10:33.613 "flush": true, 00:10:33.613 "reset": true, 00:10:33.613 "nvme_admin": false, 00:10:33.613 "nvme_io": false, 00:10:33.613 "nvme_io_md": false, 00:10:33.613 "write_zeroes": true, 00:10:33.613 "zcopy": true, 00:10:33.613 "get_zone_info": false, 00:10:33.613 "zone_management": false, 00:10:33.613 "zone_append": false, 00:10:33.613 "compare": false, 00:10:33.613 "compare_and_write": false, 00:10:33.613 "abort": true, 00:10:33.613 "seek_hole": false, 00:10:33.613 "seek_data": false, 00:10:33.613 "copy": true, 00:10:33.613 "nvme_iov_md": false 00:10:33.613 }, 00:10:33.613 "memory_domains": [ 00:10:33.613 { 00:10:33.613 "dma_device_id": "system", 00:10:33.613 "dma_device_type": 1 00:10:33.613 }, 00:10:33.613 { 00:10:33.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.613 "dma_device_type": 2 00:10:33.613 } 00:10:33.613 ], 00:10:33.614 "driver_specific": {} 00:10:33.614 } 00:10:33.614 ] 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.614 14:21:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.614 BaseBdev3 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.614 [ 00:10:33.614 { 00:10:33.614 "name": "BaseBdev3", 00:10:33.614 "aliases": [ 00:10:33.614 "be6e2a07-f8c7-4dad-ab9f-d3e1de6c11e2" 00:10:33.614 ], 00:10:33.614 "product_name": "Malloc disk", 00:10:33.614 "block_size": 512, 00:10:33.614 "num_blocks": 65536, 00:10:33.614 "uuid": "be6e2a07-f8c7-4dad-ab9f-d3e1de6c11e2", 00:10:33.614 "assigned_rate_limits": { 00:10:33.614 "rw_ios_per_sec": 0, 00:10:33.614 "rw_mbytes_per_sec": 0, 00:10:33.614 "r_mbytes_per_sec": 0, 00:10:33.614 "w_mbytes_per_sec": 0 00:10:33.614 }, 00:10:33.614 "claimed": false, 00:10:33.614 "zoned": false, 00:10:33.614 "supported_io_types": { 00:10:33.614 "read": true, 00:10:33.614 "write": true, 00:10:33.614 "unmap": true, 00:10:33.614 "flush": true, 00:10:33.614 "reset": true, 00:10:33.614 "nvme_admin": false, 00:10:33.614 "nvme_io": false, 00:10:33.614 "nvme_io_md": false, 00:10:33.614 "write_zeroes": true, 00:10:33.614 "zcopy": true, 00:10:33.614 "get_zone_info": false, 00:10:33.614 "zone_management": false, 00:10:33.614 "zone_append": false, 00:10:33.614 "compare": false, 00:10:33.614 "compare_and_write": false, 00:10:33.614 "abort": true, 00:10:33.614 "seek_hole": false, 00:10:33.614 "seek_data": false, 00:10:33.614 "copy": true, 00:10:33.614 "nvme_iov_md": false 00:10:33.614 }, 00:10:33.614 "memory_domains": [ 00:10:33.614 { 00:10:33.614 "dma_device_id": "system", 00:10:33.614 "dma_device_type": 1 00:10:33.614 }, 00:10:33.614 { 00:10:33.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.614 "dma_device_type": 2 00:10:33.614 } 00:10:33.614 ], 00:10:33.614 "driver_specific": {} 00:10:33.614 } 00:10:33.614 ] 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.614 14:21:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.614 [2024-11-20 14:21:12.528970] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:33.614 [2024-11-20 14:21:12.529038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:33.614 [2024-11-20 14:21:12.529070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.614 [2024-11-20 14:21:12.531423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=3 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.614 "name": "Existed_Raid", 00:10:33.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.614 "strip_size_kb": 64, 00:10:33.614 "state": "configuring", 00:10:33.614 "raid_level": "concat", 00:10:33.614 "superblock": false, 00:10:33.614 "num_base_bdevs": 3, 00:10:33.614 "num_base_bdevs_discovered": 2, 00:10:33.614 "num_base_bdevs_operational": 3, 00:10:33.614 "base_bdevs_list": [ 00:10:33.614 { 00:10:33.614 "name": "BaseBdev1", 00:10:33.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.614 "is_configured": false, 00:10:33.614 "data_offset": 0, 00:10:33.614 "data_size": 0 00:10:33.614 }, 00:10:33.614 { 00:10:33.614 "name": "BaseBdev2", 00:10:33.614 "uuid": "cf28a5c2-15ed-4da2-a43b-fffdd8b70ea8", 00:10:33.614 "is_configured": true, 00:10:33.614 "data_offset": 0, 00:10:33.614 "data_size": 65536 00:10:33.614 }, 
00:10:33.614 { 00:10:33.614 "name": "BaseBdev3", 00:10:33.614 "uuid": "be6e2a07-f8c7-4dad-ab9f-d3e1de6c11e2", 00:10:33.614 "is_configured": true, 00:10:33.614 "data_offset": 0, 00:10:33.614 "data_size": 65536 00:10:33.614 } 00:10:33.614 ] 00:10:33.614 }' 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.614 14:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.183 [2024-11-20 14:21:13.057155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.183 14:21:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.183 "name": "Existed_Raid", 00:10:34.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.183 "strip_size_kb": 64, 00:10:34.183 "state": "configuring", 00:10:34.183 "raid_level": "concat", 00:10:34.183 "superblock": false, 00:10:34.183 "num_base_bdevs": 3, 00:10:34.183 "num_base_bdevs_discovered": 1, 00:10:34.183 "num_base_bdevs_operational": 3, 00:10:34.183 "base_bdevs_list": [ 00:10:34.183 { 00:10:34.183 "name": "BaseBdev1", 00:10:34.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.183 "is_configured": false, 00:10:34.183 "data_offset": 0, 00:10:34.183 "data_size": 0 00:10:34.183 }, 00:10:34.183 { 00:10:34.183 "name": null, 00:10:34.183 "uuid": "cf28a5c2-15ed-4da2-a43b-fffdd8b70ea8", 00:10:34.183 "is_configured": false, 00:10:34.183 "data_offset": 0, 00:10:34.183 "data_size": 65536 00:10:34.183 }, 00:10:34.183 { 00:10:34.183 "name": "BaseBdev3", 00:10:34.183 "uuid": "be6e2a07-f8c7-4dad-ab9f-d3e1de6c11e2", 00:10:34.183 "is_configured": true, 00:10:34.183 "data_offset": 0, 00:10:34.183 "data_size": 65536 00:10:34.183 } 00:10:34.183 ] 00:10:34.183 }' 00:10:34.183 14:21:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.183 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.751 [2024-11-20 14:21:13.679367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.751 BaseBdev1 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.751 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.751 [ 00:10:34.751 { 00:10:34.751 "name": "BaseBdev1", 00:10:34.751 "aliases": [ 00:10:34.751 "26eda37c-0ceb-4da0-ab7f-811e4ecf150f" 00:10:34.751 ], 00:10:34.751 "product_name": "Malloc disk", 00:10:34.751 "block_size": 512, 00:10:34.751 "num_blocks": 65536, 00:10:34.751 "uuid": "26eda37c-0ceb-4da0-ab7f-811e4ecf150f", 00:10:34.751 "assigned_rate_limits": { 00:10:34.751 "rw_ios_per_sec": 0, 00:10:34.751 "rw_mbytes_per_sec": 0, 00:10:34.751 "r_mbytes_per_sec": 0, 00:10:34.751 "w_mbytes_per_sec": 0 00:10:34.751 }, 00:10:34.752 "claimed": true, 00:10:34.752 "claim_type": "exclusive_write", 00:10:34.752 "zoned": false, 00:10:34.752 "supported_io_types": { 00:10:34.752 "read": true, 00:10:34.752 "write": true, 00:10:34.752 "unmap": true, 00:10:34.752 "flush": true, 00:10:34.752 "reset": true, 00:10:34.752 "nvme_admin": false, 00:10:34.752 "nvme_io": false, 00:10:34.752 "nvme_io_md": false, 00:10:34.752 "write_zeroes": true, 00:10:34.752 "zcopy": true, 00:10:34.752 "get_zone_info": false, 00:10:34.752 "zone_management": false, 
00:10:34.752 "zone_append": false, 00:10:34.752 "compare": false, 00:10:34.752 "compare_and_write": false, 00:10:34.752 "abort": true, 00:10:34.752 "seek_hole": false, 00:10:34.752 "seek_data": false, 00:10:34.752 "copy": true, 00:10:34.752 "nvme_iov_md": false 00:10:34.752 }, 00:10:34.752 "memory_domains": [ 00:10:34.752 { 00:10:34.752 "dma_device_id": "system", 00:10:34.752 "dma_device_type": 1 00:10:34.752 }, 00:10:34.752 { 00:10:34.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.752 "dma_device_type": 2 00:10:34.752 } 00:10:34.752 ], 00:10:34.752 "driver_specific": {} 00:10:34.752 } 00:10:34.752 ] 00:10:34.752 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.752 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:34.752 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:34.752 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.752 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.752 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.752 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.752 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.752 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.752 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.752 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.752 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:34.752 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.752 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.752 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.752 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.011 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.011 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.011 "name": "Existed_Raid", 00:10:35.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.011 "strip_size_kb": 64, 00:10:35.011 "state": "configuring", 00:10:35.011 "raid_level": "concat", 00:10:35.011 "superblock": false, 00:10:35.011 "num_base_bdevs": 3, 00:10:35.011 "num_base_bdevs_discovered": 2, 00:10:35.011 "num_base_bdevs_operational": 3, 00:10:35.011 "base_bdevs_list": [ 00:10:35.011 { 00:10:35.011 "name": "BaseBdev1", 00:10:35.011 "uuid": "26eda37c-0ceb-4da0-ab7f-811e4ecf150f", 00:10:35.011 "is_configured": true, 00:10:35.011 "data_offset": 0, 00:10:35.011 "data_size": 65536 00:10:35.011 }, 00:10:35.011 { 00:10:35.011 "name": null, 00:10:35.011 "uuid": "cf28a5c2-15ed-4da2-a43b-fffdd8b70ea8", 00:10:35.011 "is_configured": false, 00:10:35.011 "data_offset": 0, 00:10:35.011 "data_size": 65536 00:10:35.011 }, 00:10:35.011 { 00:10:35.011 "name": "BaseBdev3", 00:10:35.011 "uuid": "be6e2a07-f8c7-4dad-ab9f-d3e1de6c11e2", 00:10:35.011 "is_configured": true, 00:10:35.011 "data_offset": 0, 00:10:35.011 "data_size": 65536 00:10:35.011 } 00:10:35.011 ] 00:10:35.011 }' 00:10:35.011 14:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.011 14:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.269 14:21:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.269 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:35.269 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.270 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.270 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.529 [2024-11-20 14:21:14.275564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.529 "name": "Existed_Raid", 00:10:35.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.529 "strip_size_kb": 64, 00:10:35.529 "state": "configuring", 00:10:35.529 "raid_level": "concat", 00:10:35.529 "superblock": false, 00:10:35.529 "num_base_bdevs": 3, 00:10:35.529 "num_base_bdevs_discovered": 1, 00:10:35.529 "num_base_bdevs_operational": 3, 00:10:35.529 "base_bdevs_list": [ 00:10:35.529 { 00:10:35.529 "name": "BaseBdev1", 00:10:35.529 "uuid": "26eda37c-0ceb-4da0-ab7f-811e4ecf150f", 00:10:35.529 "is_configured": true, 00:10:35.529 "data_offset": 0, 00:10:35.529 "data_size": 65536 00:10:35.529 }, 00:10:35.529 { 00:10:35.529 "name": null, 00:10:35.529 "uuid": "cf28a5c2-15ed-4da2-a43b-fffdd8b70ea8", 00:10:35.529 "is_configured": false, 00:10:35.529 "data_offset": 0, 00:10:35.529 "data_size": 65536 00:10:35.529 }, 00:10:35.529 { 00:10:35.529 "name": null, 00:10:35.529 "uuid": "be6e2a07-f8c7-4dad-ab9f-d3e1de6c11e2", 
00:10:35.529 "is_configured": false, 00:10:35.529 "data_offset": 0, 00:10:35.529 "data_size": 65536 00:10:35.529 } 00:10:35.529 ] 00:10:35.529 }' 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.529 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.098 [2024-11-20 14:21:14.839733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.098 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.099 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.099 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.099 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.099 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.099 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.099 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.099 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.099 "name": "Existed_Raid", 00:10:36.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.099 "strip_size_kb": 64, 00:10:36.099 "state": "configuring", 00:10:36.099 "raid_level": "concat", 00:10:36.099 "superblock": false, 00:10:36.099 "num_base_bdevs": 3, 00:10:36.099 "num_base_bdevs_discovered": 2, 00:10:36.099 "num_base_bdevs_operational": 3, 00:10:36.099 "base_bdevs_list": [ 00:10:36.099 { 00:10:36.099 "name": "BaseBdev1", 00:10:36.099 "uuid": "26eda37c-0ceb-4da0-ab7f-811e4ecf150f", 00:10:36.099 
"is_configured": true, 00:10:36.099 "data_offset": 0, 00:10:36.099 "data_size": 65536 00:10:36.099 }, 00:10:36.099 { 00:10:36.099 "name": null, 00:10:36.099 "uuid": "cf28a5c2-15ed-4da2-a43b-fffdd8b70ea8", 00:10:36.099 "is_configured": false, 00:10:36.099 "data_offset": 0, 00:10:36.099 "data_size": 65536 00:10:36.099 }, 00:10:36.099 { 00:10:36.099 "name": "BaseBdev3", 00:10:36.099 "uuid": "be6e2a07-f8c7-4dad-ab9f-d3e1de6c11e2", 00:10:36.099 "is_configured": true, 00:10:36.099 "data_offset": 0, 00:10:36.099 "data_size": 65536 00:10:36.099 } 00:10:36.099 ] 00:10:36.099 }' 00:10:36.099 14:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.099 14:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.667 [2024-11-20 14:21:15.423912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.667 "name": "Existed_Raid", 00:10:36.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.667 "strip_size_kb": 64, 00:10:36.667 "state": 
"configuring", 00:10:36.667 "raid_level": "concat", 00:10:36.667 "superblock": false, 00:10:36.667 "num_base_bdevs": 3, 00:10:36.667 "num_base_bdevs_discovered": 1, 00:10:36.667 "num_base_bdevs_operational": 3, 00:10:36.667 "base_bdevs_list": [ 00:10:36.667 { 00:10:36.667 "name": null, 00:10:36.667 "uuid": "26eda37c-0ceb-4da0-ab7f-811e4ecf150f", 00:10:36.667 "is_configured": false, 00:10:36.667 "data_offset": 0, 00:10:36.667 "data_size": 65536 00:10:36.667 }, 00:10:36.667 { 00:10:36.667 "name": null, 00:10:36.667 "uuid": "cf28a5c2-15ed-4da2-a43b-fffdd8b70ea8", 00:10:36.667 "is_configured": false, 00:10:36.667 "data_offset": 0, 00:10:36.667 "data_size": 65536 00:10:36.667 }, 00:10:36.667 { 00:10:36.667 "name": "BaseBdev3", 00:10:36.667 "uuid": "be6e2a07-f8c7-4dad-ab9f-d3e1de6c11e2", 00:10:36.667 "is_configured": true, 00:10:36.667 "data_offset": 0, 00:10:36.667 "data_size": 65536 00:10:36.667 } 00:10:36.667 ] 00:10:36.667 }' 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.667 14:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.236 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.236 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:37.236 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.236 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.236 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.236 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:37.236 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:37.236 14:21:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.236 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.236 [2024-11-20 14:21:16.094406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.236 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.236 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:37.236 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.236 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.236 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.236 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.236 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.237 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.237 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.237 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.237 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.237 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.237 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.237 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.237 14:21:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.237 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.237 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.237 "name": "Existed_Raid", 00:10:37.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.237 "strip_size_kb": 64, 00:10:37.237 "state": "configuring", 00:10:37.237 "raid_level": "concat", 00:10:37.237 "superblock": false, 00:10:37.237 "num_base_bdevs": 3, 00:10:37.237 "num_base_bdevs_discovered": 2, 00:10:37.237 "num_base_bdevs_operational": 3, 00:10:37.237 "base_bdevs_list": [ 00:10:37.237 { 00:10:37.237 "name": null, 00:10:37.237 "uuid": "26eda37c-0ceb-4da0-ab7f-811e4ecf150f", 00:10:37.237 "is_configured": false, 00:10:37.237 "data_offset": 0, 00:10:37.237 "data_size": 65536 00:10:37.237 }, 00:10:37.237 { 00:10:37.237 "name": "BaseBdev2", 00:10:37.237 "uuid": "cf28a5c2-15ed-4da2-a43b-fffdd8b70ea8", 00:10:37.237 "is_configured": true, 00:10:37.237 "data_offset": 0, 00:10:37.237 "data_size": 65536 00:10:37.237 }, 00:10:37.237 { 00:10:37.237 "name": "BaseBdev3", 00:10:37.237 "uuid": "be6e2a07-f8c7-4dad-ab9f-d3e1de6c11e2", 00:10:37.237 "is_configured": true, 00:10:37.237 "data_offset": 0, 00:10:37.237 "data_size": 65536 00:10:37.237 } 00:10:37.237 ] 00:10:37.237 }' 00:10:37.237 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.237 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.804 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.805 14:21:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 26eda37c-0ceb-4da0-ab7f-811e4ecf150f 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.805 [2024-11-20 14:21:16.748369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:37.805 [2024-11-20 14:21:16.748422] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:37.805 [2024-11-20 14:21:16.748437] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:37.805 [2024-11-20 14:21:16.748753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:37.805 [2024-11-20 14:21:16.748954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:37.805 [2024-11-20 14:21:16.748979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name Existed_Raid, raid_bdev 0x617000008200 00:10:37.805 [2024-11-20 14:21:16.749276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.805 NewBaseBdev 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.805 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.805 [ 00:10:37.805 { 00:10:37.805 "name": "NewBaseBdev", 00:10:37.805 "aliases": [ 00:10:37.805 "26eda37c-0ceb-4da0-ab7f-811e4ecf150f" 00:10:37.805 ], 00:10:37.805 "product_name": "Malloc disk", 00:10:37.805 "block_size": 512, 
00:10:37.805 "num_blocks": 65536, 00:10:37.805 "uuid": "26eda37c-0ceb-4da0-ab7f-811e4ecf150f", 00:10:37.805 "assigned_rate_limits": { 00:10:37.805 "rw_ios_per_sec": 0, 00:10:37.805 "rw_mbytes_per_sec": 0, 00:10:37.805 "r_mbytes_per_sec": 0, 00:10:37.805 "w_mbytes_per_sec": 0 00:10:37.805 }, 00:10:37.805 "claimed": true, 00:10:37.805 "claim_type": "exclusive_write", 00:10:37.805 "zoned": false, 00:10:37.805 "supported_io_types": { 00:10:37.805 "read": true, 00:10:37.805 "write": true, 00:10:37.805 "unmap": true, 00:10:37.805 "flush": true, 00:10:37.805 "reset": true, 00:10:37.805 "nvme_admin": false, 00:10:37.805 "nvme_io": false, 00:10:37.805 "nvme_io_md": false, 00:10:37.805 "write_zeroes": true, 00:10:37.805 "zcopy": true, 00:10:37.805 "get_zone_info": false, 00:10:37.805 "zone_management": false, 00:10:37.805 "zone_append": false, 00:10:37.805 "compare": false, 00:10:37.805 "compare_and_write": false, 00:10:37.805 "abort": true, 00:10:37.805 "seek_hole": false, 00:10:37.805 "seek_data": false, 00:10:37.805 "copy": true, 00:10:37.805 "nvme_iov_md": false 00:10:37.805 }, 00:10:38.064 "memory_domains": [ 00:10:38.064 { 00:10:38.064 "dma_device_id": "system", 00:10:38.064 "dma_device_type": 1 00:10:38.064 }, 00:10:38.064 { 00:10:38.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.064 "dma_device_type": 2 00:10:38.064 } 00:10:38.064 ], 00:10:38.064 "driver_specific": {} 00:10:38.064 } 00:10:38.064 ] 00:10:38.064 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.064 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.064 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:38.064 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.064 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:10:38.064 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.064 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.064 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.064 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.064 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.064 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.065 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.065 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.065 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.065 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.065 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.065 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.065 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.065 "name": "Existed_Raid", 00:10:38.065 "uuid": "8ce628d7-552f-4ddd-b27c-f67b7a39ad1a", 00:10:38.065 "strip_size_kb": 64, 00:10:38.065 "state": "online", 00:10:38.065 "raid_level": "concat", 00:10:38.065 "superblock": false, 00:10:38.065 "num_base_bdevs": 3, 00:10:38.065 "num_base_bdevs_discovered": 3, 00:10:38.065 "num_base_bdevs_operational": 3, 00:10:38.065 "base_bdevs_list": [ 00:10:38.065 { 00:10:38.065 "name": "NewBaseBdev", 00:10:38.065 "uuid": "26eda37c-0ceb-4da0-ab7f-811e4ecf150f", 00:10:38.065 
"is_configured": true, 00:10:38.065 "data_offset": 0, 00:10:38.065 "data_size": 65536 00:10:38.065 }, 00:10:38.065 { 00:10:38.065 "name": "BaseBdev2", 00:10:38.065 "uuid": "cf28a5c2-15ed-4da2-a43b-fffdd8b70ea8", 00:10:38.065 "is_configured": true, 00:10:38.065 "data_offset": 0, 00:10:38.065 "data_size": 65536 00:10:38.065 }, 00:10:38.065 { 00:10:38.065 "name": "BaseBdev3", 00:10:38.065 "uuid": "be6e2a07-f8c7-4dad-ab9f-d3e1de6c11e2", 00:10:38.065 "is_configured": true, 00:10:38.065 "data_offset": 0, 00:10:38.065 "data_size": 65536 00:10:38.065 } 00:10:38.065 ] 00:10:38.065 }' 00:10:38.065 14:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.065 14:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.324 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:38.324 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:38.324 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:38.324 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:38.324 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:38.324 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:38.324 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:38.324 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:38.324 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.324 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.582 [2024-11-20 14:21:17.304938] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:10:38.582 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.582 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:38.582 "name": "Existed_Raid", 00:10:38.582 "aliases": [ 00:10:38.582 "8ce628d7-552f-4ddd-b27c-f67b7a39ad1a" 00:10:38.582 ], 00:10:38.582 "product_name": "Raid Volume", 00:10:38.582 "block_size": 512, 00:10:38.582 "num_blocks": 196608, 00:10:38.582 "uuid": "8ce628d7-552f-4ddd-b27c-f67b7a39ad1a", 00:10:38.582 "assigned_rate_limits": { 00:10:38.582 "rw_ios_per_sec": 0, 00:10:38.582 "rw_mbytes_per_sec": 0, 00:10:38.583 "r_mbytes_per_sec": 0, 00:10:38.583 "w_mbytes_per_sec": 0 00:10:38.583 }, 00:10:38.583 "claimed": false, 00:10:38.583 "zoned": false, 00:10:38.583 "supported_io_types": { 00:10:38.583 "read": true, 00:10:38.583 "write": true, 00:10:38.583 "unmap": true, 00:10:38.583 "flush": true, 00:10:38.583 "reset": true, 00:10:38.583 "nvme_admin": false, 00:10:38.583 "nvme_io": false, 00:10:38.583 "nvme_io_md": false, 00:10:38.583 "write_zeroes": true, 00:10:38.583 "zcopy": false, 00:10:38.583 "get_zone_info": false, 00:10:38.583 "zone_management": false, 00:10:38.583 "zone_append": false, 00:10:38.583 "compare": false, 00:10:38.583 "compare_and_write": false, 00:10:38.583 "abort": false, 00:10:38.583 "seek_hole": false, 00:10:38.583 "seek_data": false, 00:10:38.583 "copy": false, 00:10:38.583 "nvme_iov_md": false 00:10:38.583 }, 00:10:38.583 "memory_domains": [ 00:10:38.583 { 00:10:38.583 "dma_device_id": "system", 00:10:38.583 "dma_device_type": 1 00:10:38.583 }, 00:10:38.583 { 00:10:38.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.583 "dma_device_type": 2 00:10:38.583 }, 00:10:38.583 { 00:10:38.583 "dma_device_id": "system", 00:10:38.583 "dma_device_type": 1 00:10:38.583 }, 00:10:38.583 { 00:10:38.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.583 "dma_device_type": 2 00:10:38.583 }, 00:10:38.583 { 
00:10:38.583 "dma_device_id": "system", 00:10:38.583 "dma_device_type": 1 00:10:38.583 }, 00:10:38.583 { 00:10:38.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.583 "dma_device_type": 2 00:10:38.583 } 00:10:38.583 ], 00:10:38.583 "driver_specific": { 00:10:38.583 "raid": { 00:10:38.583 "uuid": "8ce628d7-552f-4ddd-b27c-f67b7a39ad1a", 00:10:38.583 "strip_size_kb": 64, 00:10:38.583 "state": "online", 00:10:38.583 "raid_level": "concat", 00:10:38.583 "superblock": false, 00:10:38.583 "num_base_bdevs": 3, 00:10:38.583 "num_base_bdevs_discovered": 3, 00:10:38.583 "num_base_bdevs_operational": 3, 00:10:38.583 "base_bdevs_list": [ 00:10:38.583 { 00:10:38.583 "name": "NewBaseBdev", 00:10:38.583 "uuid": "26eda37c-0ceb-4da0-ab7f-811e4ecf150f", 00:10:38.583 "is_configured": true, 00:10:38.583 "data_offset": 0, 00:10:38.583 "data_size": 65536 00:10:38.583 }, 00:10:38.583 { 00:10:38.583 "name": "BaseBdev2", 00:10:38.583 "uuid": "cf28a5c2-15ed-4da2-a43b-fffdd8b70ea8", 00:10:38.583 "is_configured": true, 00:10:38.583 "data_offset": 0, 00:10:38.583 "data_size": 65536 00:10:38.583 }, 00:10:38.583 { 00:10:38.583 "name": "BaseBdev3", 00:10:38.583 "uuid": "be6e2a07-f8c7-4dad-ab9f-d3e1de6c11e2", 00:10:38.583 "is_configured": true, 00:10:38.583 "data_offset": 0, 00:10:38.583 "data_size": 65536 00:10:38.583 } 00:10:38.583 ] 00:10:38.583 } 00:10:38.583 } 00:10:38.583 }' 00:10:38.583 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.583 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:38.583 BaseBdev2 00:10:38.583 BaseBdev3' 00:10:38.583 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.583 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:38.583 14:21:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.583 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.583 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:38.583 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.583 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.583 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.583 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.583 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.583 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.583 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:38.583 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.583 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.583 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.583 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.843 
14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.843 [2024-11-20 14:21:17.640633] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.843 [2024-11-20 14:21:17.640670] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.843 [2024-11-20 14:21:17.640769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.843 [2024-11-20 14:21:17.640849] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.843 [2024-11-20 14:21:17.640868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 65635 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65635 ']' 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65635 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65635 00:10:38.843 killing process with pid 65635 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65635' 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65635 00:10:38.843 [2024-11-20 14:21:17.682365] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:38.843 14:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65635 00:10:39.102 [2024-11-20 14:21:17.950286] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.477 ************************************ 00:10:40.477 END TEST raid_state_function_test 00:10:40.477 ************************************ 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:40.477 00:10:40.477 real 0m11.946s 00:10:40.477 user 0m19.956s 00:10:40.477 sys 0m1.565s 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:40.477 14:21:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:40.477 14:21:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:40.477 14:21:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.477 14:21:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:40.477 ************************************ 00:10:40.477 START TEST raid_state_function_test_sb 00:10:40.477 ************************************ 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.477 
14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66267 00:10:40.477 Process raid pid: 66267 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66267' 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 
-L bdev_raid 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66267 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66267 ']' 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.477 14:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.477 [2024-11-20 14:21:19.172812] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:10:40.477 [2024-11-20 14:21:19.172964] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.477 [2024-11-20 14:21:19.349329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.735 [2024-11-20 14:21:19.480538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.735 [2024-11-20 14:21:19.693931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.735 [2024-11-20 14:21:19.693978] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.301 [2024-11-20 14:21:20.148601] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.301 [2024-11-20 14:21:20.148688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.301 [2024-11-20 14:21:20.148705] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.301 [2024-11-20 14:21:20.148719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.301 [2024-11-20 14:21:20.148728] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:41.301 [2024-11-20 14:21:20.148757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.301 "name": "Existed_Raid", 00:10:41.301 "uuid": "b026e41b-f122-4f09-a384-d5613b718c2d", 00:10:41.301 "strip_size_kb": 64, 00:10:41.301 "state": "configuring", 00:10:41.301 "raid_level": "concat", 00:10:41.301 "superblock": true, 00:10:41.301 "num_base_bdevs": 3, 00:10:41.301 "num_base_bdevs_discovered": 0, 00:10:41.301 "num_base_bdevs_operational": 3, 00:10:41.301 "base_bdevs_list": [ 00:10:41.301 { 00:10:41.301 "name": "BaseBdev1", 00:10:41.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.301 "is_configured": false, 00:10:41.301 "data_offset": 0, 00:10:41.301 "data_size": 0 00:10:41.301 }, 00:10:41.301 { 00:10:41.301 "name": "BaseBdev2", 00:10:41.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.301 "is_configured": false, 00:10:41.301 "data_offset": 0, 00:10:41.301 "data_size": 0 00:10:41.301 }, 00:10:41.301 { 00:10:41.301 "name": "BaseBdev3", 00:10:41.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.301 "is_configured": false, 00:10:41.301 "data_offset": 0, 00:10:41.301 "data_size": 0 00:10:41.301 } 00:10:41.301 ] 00:10:41.301 }' 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.301 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.870 [2024-11-20 14:21:20.676744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.870 [2024-11-20 14:21:20.676820] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.870 [2024-11-20 14:21:20.684728] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.870 [2024-11-20 14:21:20.684807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.870 [2024-11-20 14:21:20.684820] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.870 [2024-11-20 14:21:20.684835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.870 [2024-11-20 14:21:20.684845] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:41.870 [2024-11-20 14:21:20.684858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.870 [2024-11-20 14:21:20.731957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.870 BaseBdev1 
00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.870 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.870 [ 00:10:41.870 { 00:10:41.870 "name": "BaseBdev1", 00:10:41.870 "aliases": [ 00:10:41.870 "9b261cca-30a1-421c-b83c-2a5659a52f1e" 00:10:41.870 ], 00:10:41.870 "product_name": "Malloc disk", 00:10:41.870 "block_size": 512, 00:10:41.870 "num_blocks": 65536, 00:10:41.870 "uuid": "9b261cca-30a1-421c-b83c-2a5659a52f1e", 00:10:41.870 "assigned_rate_limits": { 00:10:41.870 
"rw_ios_per_sec": 0, 00:10:41.870 "rw_mbytes_per_sec": 0, 00:10:41.870 "r_mbytes_per_sec": 0, 00:10:41.870 "w_mbytes_per_sec": 0 00:10:41.870 }, 00:10:41.870 "claimed": true, 00:10:41.870 "claim_type": "exclusive_write", 00:10:41.870 "zoned": false, 00:10:41.871 "supported_io_types": { 00:10:41.871 "read": true, 00:10:41.871 "write": true, 00:10:41.871 "unmap": true, 00:10:41.871 "flush": true, 00:10:41.871 "reset": true, 00:10:41.871 "nvme_admin": false, 00:10:41.871 "nvme_io": false, 00:10:41.871 "nvme_io_md": false, 00:10:41.871 "write_zeroes": true, 00:10:41.871 "zcopy": true, 00:10:41.871 "get_zone_info": false, 00:10:41.871 "zone_management": false, 00:10:41.871 "zone_append": false, 00:10:41.871 "compare": false, 00:10:41.871 "compare_and_write": false, 00:10:41.871 "abort": true, 00:10:41.871 "seek_hole": false, 00:10:41.871 "seek_data": false, 00:10:41.871 "copy": true, 00:10:41.871 "nvme_iov_md": false 00:10:41.871 }, 00:10:41.871 "memory_domains": [ 00:10:41.871 { 00:10:41.871 "dma_device_id": "system", 00:10:41.871 "dma_device_type": 1 00:10:41.871 }, 00:10:41.871 { 00:10:41.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.871 "dma_device_type": 2 00:10:41.871 } 00:10:41.871 ], 00:10:41.871 "driver_specific": {} 00:10:41.871 } 00:10:41.871 ] 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.871 "name": "Existed_Raid", 00:10:41.871 "uuid": "46730d95-04f8-4c4b-8e83-054309178812", 00:10:41.871 "strip_size_kb": 64, 00:10:41.871 "state": "configuring", 00:10:41.871 "raid_level": "concat", 00:10:41.871 "superblock": true, 00:10:41.871 "num_base_bdevs": 3, 00:10:41.871 "num_base_bdevs_discovered": 1, 00:10:41.871 "num_base_bdevs_operational": 3, 00:10:41.871 "base_bdevs_list": [ 00:10:41.871 { 00:10:41.871 "name": "BaseBdev1", 00:10:41.871 "uuid": "9b261cca-30a1-421c-b83c-2a5659a52f1e", 00:10:41.871 "is_configured": true, 00:10:41.871 "data_offset": 2048, 00:10:41.871 "data_size": 
63488 00:10:41.871 }, 00:10:41.871 { 00:10:41.871 "name": "BaseBdev2", 00:10:41.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.871 "is_configured": false, 00:10:41.871 "data_offset": 0, 00:10:41.871 "data_size": 0 00:10:41.871 }, 00:10:41.871 { 00:10:41.871 "name": "BaseBdev3", 00:10:41.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.871 "is_configured": false, 00:10:41.871 "data_offset": 0, 00:10:41.871 "data_size": 0 00:10:41.871 } 00:10:41.871 ] 00:10:41.871 }' 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.871 14:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.439 [2024-11-20 14:21:21.272233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.439 [2024-11-20 14:21:21.272296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.439 [2024-11-20 14:21:21.280264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.439 [2024-11-20 
14:21:21.282559] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.439 [2024-11-20 14:21:21.282621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.439 [2024-11-20 14:21:21.282636] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:42.439 [2024-11-20 14:21:21.282650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.439 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.439 "name": "Existed_Raid", 00:10:42.439 "uuid": "dd08f2cb-04b0-4d50-a646-93d31690f38c", 00:10:42.439 "strip_size_kb": 64, 00:10:42.439 "state": "configuring", 00:10:42.439 "raid_level": "concat", 00:10:42.439 "superblock": true, 00:10:42.440 "num_base_bdevs": 3, 00:10:42.440 "num_base_bdevs_discovered": 1, 00:10:42.440 "num_base_bdevs_operational": 3, 00:10:42.440 "base_bdevs_list": [ 00:10:42.440 { 00:10:42.440 "name": "BaseBdev1", 00:10:42.440 "uuid": "9b261cca-30a1-421c-b83c-2a5659a52f1e", 00:10:42.440 "is_configured": true, 00:10:42.440 "data_offset": 2048, 00:10:42.440 "data_size": 63488 00:10:42.440 }, 00:10:42.440 { 00:10:42.440 "name": "BaseBdev2", 00:10:42.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.440 "is_configured": false, 00:10:42.440 "data_offset": 0, 00:10:42.440 "data_size": 0 00:10:42.440 }, 00:10:42.440 { 00:10:42.440 "name": "BaseBdev3", 00:10:42.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.440 "is_configured": false, 00:10:42.440 "data_offset": 0, 00:10:42.440 "data_size": 0 00:10:42.440 } 00:10:42.440 ] 00:10:42.440 }' 00:10:42.440 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.440 14:21:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:43.007 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:43.007 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.007 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.007 [2024-11-20 14:21:21.839838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.007 BaseBdev2 00:10:43.007 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.007 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:43.007 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:43.007 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.007 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:43.007 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.007 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.007 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.007 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.007 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.007 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.007 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:43.007 14:21:21 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.007 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.007 [ 00:10:43.007 { 00:10:43.007 "name": "BaseBdev2", 00:10:43.007 "aliases": [ 00:10:43.007 "ca56b25a-9e3f-435e-8208-47d671ad584d" 00:10:43.007 ], 00:10:43.007 "product_name": "Malloc disk", 00:10:43.007 "block_size": 512, 00:10:43.007 "num_blocks": 65536, 00:10:43.007 "uuid": "ca56b25a-9e3f-435e-8208-47d671ad584d", 00:10:43.007 "assigned_rate_limits": { 00:10:43.007 "rw_ios_per_sec": 0, 00:10:43.007 "rw_mbytes_per_sec": 0, 00:10:43.007 "r_mbytes_per_sec": 0, 00:10:43.007 "w_mbytes_per_sec": 0 00:10:43.007 }, 00:10:43.007 "claimed": true, 00:10:43.007 "claim_type": "exclusive_write", 00:10:43.007 "zoned": false, 00:10:43.007 "supported_io_types": { 00:10:43.007 "read": true, 00:10:43.007 "write": true, 00:10:43.007 "unmap": true, 00:10:43.008 "flush": true, 00:10:43.008 "reset": true, 00:10:43.008 "nvme_admin": false, 00:10:43.008 "nvme_io": false, 00:10:43.008 "nvme_io_md": false, 00:10:43.008 "write_zeroes": true, 00:10:43.008 "zcopy": true, 00:10:43.008 "get_zone_info": false, 00:10:43.008 "zone_management": false, 00:10:43.008 "zone_append": false, 00:10:43.008 "compare": false, 00:10:43.008 "compare_and_write": false, 00:10:43.008 "abort": true, 00:10:43.008 "seek_hole": false, 00:10:43.008 "seek_data": false, 00:10:43.008 "copy": true, 00:10:43.008 "nvme_iov_md": false 00:10:43.008 }, 00:10:43.008 "memory_domains": [ 00:10:43.008 { 00:10:43.008 "dma_device_id": "system", 00:10:43.008 "dma_device_type": 1 00:10:43.008 }, 00:10:43.008 { 00:10:43.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.008 "dma_device_type": 2 00:10:43.008 } 00:10:43.008 ], 00:10:43.008 "driver_specific": {} 00:10:43.008 } 00:10:43.008 ] 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.008 "name": "Existed_Raid", 00:10:43.008 "uuid": "dd08f2cb-04b0-4d50-a646-93d31690f38c", 00:10:43.008 "strip_size_kb": 64, 00:10:43.008 "state": "configuring", 00:10:43.008 "raid_level": "concat", 00:10:43.008 "superblock": true, 00:10:43.008 "num_base_bdevs": 3, 00:10:43.008 "num_base_bdevs_discovered": 2, 00:10:43.008 "num_base_bdevs_operational": 3, 00:10:43.008 "base_bdevs_list": [ 00:10:43.008 { 00:10:43.008 "name": "BaseBdev1", 00:10:43.008 "uuid": "9b261cca-30a1-421c-b83c-2a5659a52f1e", 00:10:43.008 "is_configured": true, 00:10:43.008 "data_offset": 2048, 00:10:43.008 "data_size": 63488 00:10:43.008 }, 00:10:43.008 { 00:10:43.008 "name": "BaseBdev2", 00:10:43.008 "uuid": "ca56b25a-9e3f-435e-8208-47d671ad584d", 00:10:43.008 "is_configured": true, 00:10:43.008 "data_offset": 2048, 00:10:43.008 "data_size": 63488 00:10:43.008 }, 00:10:43.008 { 00:10:43.008 "name": "BaseBdev3", 00:10:43.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.008 "is_configured": false, 00:10:43.008 "data_offset": 0, 00:10:43.008 "data_size": 0 00:10:43.008 } 00:10:43.008 ] 00:10:43.008 }' 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.008 14:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.574 [2024-11-20 14:21:22.427635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:43.574 [2024-11-20 14:21:22.428178] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:43.574 [2024-11-20 14:21:22.428330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:43.574 BaseBdev3 00:10:43.574 [2024-11-20 14:21:22.428837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:43.574 [2024-11-20 14:21:22.429185] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:43.574 [2024-11-20 14:21:22.429322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:43.574 [2024-11-20 14:21:22.429703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 --
# [[ 0 == 0 ]] 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.574 [ 00:10:43.574 { 00:10:43.574 "name": "BaseBdev3", 00:10:43.574 "aliases": [ 00:10:43.574 "0fea4a90-fb01-4b9a-b7fd-49430c22af49" 00:10:43.574 ], 00:10:43.574 "product_name": "Malloc disk", 00:10:43.574 "block_size": 512, 00:10:43.574 "num_blocks": 65536, 00:10:43.574 "uuid": "0fea4a90-fb01-4b9a-b7fd-49430c22af49", 00:10:43.574 "assigned_rate_limits": { 00:10:43.574 "rw_ios_per_sec": 0, 00:10:43.574 "rw_mbytes_per_sec": 0, 00:10:43.574 "r_mbytes_per_sec": 0, 00:10:43.574 "w_mbytes_per_sec": 0 00:10:43.574 }, 00:10:43.574 "claimed": true, 00:10:43.574 "claim_type": "exclusive_write", 00:10:43.574 "zoned": false, 00:10:43.574 "supported_io_types": { 00:10:43.574 "read": true, 00:10:43.574 "write": true, 00:10:43.574 "unmap": true, 00:10:43.574 "flush": true, 00:10:43.574 "reset": true, 00:10:43.574 "nvme_admin": false, 00:10:43.574 "nvme_io": false, 00:10:43.574 "nvme_io_md": false, 00:10:43.574 "write_zeroes": true, 00:10:43.574 "zcopy": true, 00:10:43.574 "get_zone_info": false, 00:10:43.574 "zone_management": false, 00:10:43.574 "zone_append": false, 00:10:43.574 "compare": false, 00:10:43.574 "compare_and_write": false, 00:10:43.574 "abort": true, 00:10:43.574 "seek_hole": false, 00:10:43.574 "seek_data": false, 00:10:43.574 "copy": true, 00:10:43.574 "nvme_iov_md": false 00:10:43.574 }, 00:10:43.574 "memory_domains": [ 00:10:43.574 { 00:10:43.574 "dma_device_id": "system", 00:10:43.574 "dma_device_type": 1 00:10:43.574 }, 00:10:43.574 { 00:10:43.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.574 "dma_device_type": 2 00:10:43.574 } 00:10:43.574 ], 00:10:43.574 
"driver_specific": {} 00:10:43.574 } 00:10:43.574 ] 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.574 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.574 "name": "Existed_Raid", 00:10:43.574 "uuid": "dd08f2cb-04b0-4d50-a646-93d31690f38c", 00:10:43.574 "strip_size_kb": 64, 00:10:43.574 "state": "online", 00:10:43.574 "raid_level": "concat", 00:10:43.574 "superblock": true, 00:10:43.574 "num_base_bdevs": 3, 00:10:43.574 "num_base_bdevs_discovered": 3, 00:10:43.574 "num_base_bdevs_operational": 3, 00:10:43.574 "base_bdevs_list": [ 00:10:43.574 { 00:10:43.574 "name": "BaseBdev1", 00:10:43.574 "uuid": "9b261cca-30a1-421c-b83c-2a5659a52f1e", 00:10:43.574 "is_configured": true, 00:10:43.574 "data_offset": 2048, 00:10:43.574 "data_size": 63488 00:10:43.574 }, 00:10:43.574 { 00:10:43.574 "name": "BaseBdev2", 00:10:43.574 "uuid": "ca56b25a-9e3f-435e-8208-47d671ad584d", 00:10:43.574 "is_configured": true, 00:10:43.574 "data_offset": 2048, 00:10:43.574 "data_size": 63488 00:10:43.574 }, 00:10:43.574 { 00:10:43.574 "name": "BaseBdev3", 00:10:43.574 "uuid": "0fea4a90-fb01-4b9a-b7fd-49430c22af49", 00:10:43.574 "is_configured": true, 00:10:43.575 "data_offset": 2048, 00:10:43.575 "data_size": 63488 00:10:43.575 } 00:10:43.575 ] 00:10:43.575 }' 00:10:43.575 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.575 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.142 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:44.142 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:44.142 14:21:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.142 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.142 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.142 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.142 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:44.142 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.142 14:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.142 14:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.142 [2024-11-20 14:21:22.996357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.142 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.142 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.142 "name": "Existed_Raid", 00:10:44.142 "aliases": [ 00:10:44.142 "dd08f2cb-04b0-4d50-a646-93d31690f38c" 00:10:44.142 ], 00:10:44.142 "product_name": "Raid Volume", 00:10:44.142 "block_size": 512, 00:10:44.142 "num_blocks": 190464, 00:10:44.142 "uuid": "dd08f2cb-04b0-4d50-a646-93d31690f38c", 00:10:44.142 "assigned_rate_limits": { 00:10:44.142 "rw_ios_per_sec": 0, 00:10:44.142 "rw_mbytes_per_sec": 0, 00:10:44.142 "r_mbytes_per_sec": 0, 00:10:44.142 "w_mbytes_per_sec": 0 00:10:44.142 }, 00:10:44.142 "claimed": false, 00:10:44.142 "zoned": false, 00:10:44.142 "supported_io_types": { 00:10:44.142 "read": true, 00:10:44.142 "write": true, 00:10:44.142 "unmap": true, 00:10:44.142 "flush": true, 00:10:44.142 "reset": true, 00:10:44.142 "nvme_admin": false, 00:10:44.142 
"nvme_io": false, 00:10:44.142 "nvme_io_md": false, 00:10:44.142 "write_zeroes": true, 00:10:44.142 "zcopy": false, 00:10:44.142 "get_zone_info": false, 00:10:44.142 "zone_management": false, 00:10:44.142 "zone_append": false, 00:10:44.142 "compare": false, 00:10:44.142 "compare_and_write": false, 00:10:44.142 "abort": false, 00:10:44.142 "seek_hole": false, 00:10:44.142 "seek_data": false, 00:10:44.142 "copy": false, 00:10:44.142 "nvme_iov_md": false 00:10:44.142 }, 00:10:44.142 "memory_domains": [ 00:10:44.142 { 00:10:44.142 "dma_device_id": "system", 00:10:44.142 "dma_device_type": 1 00:10:44.142 }, 00:10:44.142 { 00:10:44.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.142 "dma_device_type": 2 00:10:44.142 }, 00:10:44.142 { 00:10:44.142 "dma_device_id": "system", 00:10:44.142 "dma_device_type": 1 00:10:44.142 }, 00:10:44.142 { 00:10:44.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.142 "dma_device_type": 2 00:10:44.142 }, 00:10:44.142 { 00:10:44.142 "dma_device_id": "system", 00:10:44.142 "dma_device_type": 1 00:10:44.142 }, 00:10:44.142 { 00:10:44.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.142 "dma_device_type": 2 00:10:44.142 } 00:10:44.142 ], 00:10:44.142 "driver_specific": { 00:10:44.142 "raid": { 00:10:44.142 "uuid": "dd08f2cb-04b0-4d50-a646-93d31690f38c", 00:10:44.142 "strip_size_kb": 64, 00:10:44.142 "state": "online", 00:10:44.142 "raid_level": "concat", 00:10:44.142 "superblock": true, 00:10:44.142 "num_base_bdevs": 3, 00:10:44.142 "num_base_bdevs_discovered": 3, 00:10:44.142 "num_base_bdevs_operational": 3, 00:10:44.142 "base_bdevs_list": [ 00:10:44.142 { 00:10:44.142 "name": "BaseBdev1", 00:10:44.142 "uuid": "9b261cca-30a1-421c-b83c-2a5659a52f1e", 00:10:44.142 "is_configured": true, 00:10:44.142 "data_offset": 2048, 00:10:44.142 "data_size": 63488 00:10:44.142 }, 00:10:44.142 { 00:10:44.142 "name": "BaseBdev2", 00:10:44.142 "uuid": "ca56b25a-9e3f-435e-8208-47d671ad584d", 00:10:44.142 "is_configured": true, 00:10:44.142 
"data_offset": 2048, 00:10:44.142 "data_size": 63488 00:10:44.142 }, 00:10:44.142 { 00:10:44.142 "name": "BaseBdev3", 00:10:44.142 "uuid": "0fea4a90-fb01-4b9a-b7fd-49430c22af49", 00:10:44.142 "is_configured": true, 00:10:44.142 "data_offset": 2048, 00:10:44.142 "data_size": 63488 00:10:44.142 } 00:10:44.142 ] 00:10:44.142 } 00:10:44.142 } 00:10:44.142 }' 00:10:44.142 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.142 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:44.142 BaseBdev2 00:10:44.142 BaseBdev3' 00:10:44.142 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.401 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.401 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.401 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:44.401 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.401 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.401 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.401 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.401 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.401 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.401 14:21:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.401 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:44.401 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.401 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.401 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.402 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.402 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.402 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.402 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.402 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.402 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:44.402 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.402 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.402 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.402 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.402 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.402 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:10:44.402 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.402 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.402 [2024-11-20 14:21:23.316060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:44.402 [2024-11-20 14:21:23.316093] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.402 [2024-11-20 14:21:23.316160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:44.661 14:21:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.661 "name": "Existed_Raid", 00:10:44.661 "uuid": "dd08f2cb-04b0-4d50-a646-93d31690f38c", 00:10:44.661 "strip_size_kb": 64, 00:10:44.661 "state": "offline", 00:10:44.661 "raid_level": "concat", 00:10:44.661 "superblock": true, 00:10:44.661 "num_base_bdevs": 3, 00:10:44.661 "num_base_bdevs_discovered": 2, 00:10:44.661 "num_base_bdevs_operational": 2, 00:10:44.661 "base_bdevs_list": [ 00:10:44.661 { 00:10:44.661 "name": null, 00:10:44.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.661 "is_configured": false, 00:10:44.661 "data_offset": 0, 00:10:44.661 "data_size": 63488 00:10:44.661 }, 00:10:44.661 { 00:10:44.661 "name": "BaseBdev2", 00:10:44.661 "uuid": "ca56b25a-9e3f-435e-8208-47d671ad584d", 00:10:44.661 "is_configured": true, 00:10:44.661 "data_offset": 2048, 00:10:44.661 "data_size": 63488 00:10:44.661 }, 00:10:44.661 { 00:10:44.661 "name": 
"BaseBdev3", 00:10:44.661 "uuid": "0fea4a90-fb01-4b9a-b7fd-49430c22af49", 00:10:44.661 "is_configured": true, 00:10:44.661 "data_offset": 2048, 00:10:44.661 "data_size": 63488 00:10:44.661 } 00:10:44.661 ] 00:10:44.661 }' 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.661 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.229 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:45.229 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.229 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.229 14:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.229 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.229 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.229 14:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.229 [2024-11-20 14:21:24.023498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.229 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.229 [2024-11-20 14:21:24.170023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:45.229 [2024-11-20 14:21:24.170100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:45.487 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.488 14:21:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.488 BaseBdev2 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.488 [ 00:10:45.488 { 00:10:45.488 "name": "BaseBdev2", 00:10:45.488 "aliases": [ 00:10:45.488 "8c961cc4-e123-4b5a-9da8-51fdb8f4db85" 00:10:45.488 ], 00:10:45.488 "product_name": "Malloc disk", 00:10:45.488 "block_size": 512, 00:10:45.488 "num_blocks": 65536, 00:10:45.488 "uuid": "8c961cc4-e123-4b5a-9da8-51fdb8f4db85", 00:10:45.488 "assigned_rate_limits": { 00:10:45.488 "rw_ios_per_sec": 0, 00:10:45.488 "rw_mbytes_per_sec": 0, 00:10:45.488 "r_mbytes_per_sec": 0, 00:10:45.488 "w_mbytes_per_sec": 0 00:10:45.488 }, 00:10:45.488 "claimed": false, 00:10:45.488 "zoned": false, 00:10:45.488 "supported_io_types": { 00:10:45.488 "read": true, 00:10:45.488 "write": true, 00:10:45.488 "unmap": true, 00:10:45.488 "flush": true, 00:10:45.488 "reset": true, 00:10:45.488 "nvme_admin": false, 00:10:45.488 "nvme_io": false, 00:10:45.488 "nvme_io_md": false, 00:10:45.488 "write_zeroes": true, 00:10:45.488 
"zcopy": true, 00:10:45.488 "get_zone_info": false, 00:10:45.488 "zone_management": false, 00:10:45.488 "zone_append": false, 00:10:45.488 "compare": false, 00:10:45.488 "compare_and_write": false, 00:10:45.488 "abort": true, 00:10:45.488 "seek_hole": false, 00:10:45.488 "seek_data": false, 00:10:45.488 "copy": true, 00:10:45.488 "nvme_iov_md": false 00:10:45.488 }, 00:10:45.488 "memory_domains": [ 00:10:45.488 { 00:10:45.488 "dma_device_id": "system", 00:10:45.488 "dma_device_type": 1 00:10:45.488 }, 00:10:45.488 { 00:10:45.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.488 "dma_device_type": 2 00:10:45.488 } 00:10:45.488 ], 00:10:45.488 "driver_specific": {} 00:10:45.488 } 00:10:45.488 ] 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.488 BaseBdev3 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.488 
14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.488 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.488 [ 00:10:45.488 { 00:10:45.488 "name": "BaseBdev3", 00:10:45.488 "aliases": [ 00:10:45.488 "918b1ce3-17c3-4409-9b98-e658078ae176" 00:10:45.488 ], 00:10:45.488 "product_name": "Malloc disk", 00:10:45.488 "block_size": 512, 00:10:45.488 "num_blocks": 65536, 00:10:45.488 "uuid": "918b1ce3-17c3-4409-9b98-e658078ae176", 00:10:45.488 "assigned_rate_limits": { 00:10:45.488 "rw_ios_per_sec": 0, 00:10:45.488 "rw_mbytes_per_sec": 0, 00:10:45.488 "r_mbytes_per_sec": 0, 00:10:45.488 "w_mbytes_per_sec": 0 00:10:45.488 }, 00:10:45.488 "claimed": false, 00:10:45.488 "zoned": false, 00:10:45.488 "supported_io_types": { 00:10:45.488 "read": true, 00:10:45.488 "write": true, 00:10:45.488 "unmap": true, 00:10:45.488 "flush": true, 00:10:45.488 "reset": true, 00:10:45.488 "nvme_admin": false, 00:10:45.488 "nvme_io": false, 00:10:45.488 "nvme_io_md": false, 
00:10:45.488 "write_zeroes": true, 00:10:45.488 "zcopy": true, 00:10:45.488 "get_zone_info": false, 00:10:45.488 "zone_management": false, 00:10:45.488 "zone_append": false, 00:10:45.488 "compare": false, 00:10:45.488 "compare_and_write": false, 00:10:45.488 "abort": true, 00:10:45.747 "seek_hole": false, 00:10:45.747 "seek_data": false, 00:10:45.747 "copy": true, 00:10:45.747 "nvme_iov_md": false 00:10:45.747 }, 00:10:45.747 "memory_domains": [ 00:10:45.747 { 00:10:45.747 "dma_device_id": "system", 00:10:45.747 "dma_device_type": 1 00:10:45.747 }, 00:10:45.747 { 00:10:45.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.747 "dma_device_type": 2 00:10:45.747 } 00:10:45.747 ], 00:10:45.747 "driver_specific": {} 00:10:45.747 } 00:10:45.747 ] 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.747 [2024-11-20 14:21:24.479185] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.747 [2024-11-20 14:21:24.479238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.747 [2024-11-20 14:21:24.479269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.747 
[2024-11-20 14:21:24.482261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.747 "name": "Existed_Raid", 00:10:45.747 "uuid": "6cb55f00-291f-4ff7-94f5-bec5bb282b6a", 00:10:45.747 "strip_size_kb": 64, 00:10:45.747 "state": "configuring", 00:10:45.747 "raid_level": "concat", 00:10:45.747 "superblock": true, 00:10:45.747 "num_base_bdevs": 3, 00:10:45.747 "num_base_bdevs_discovered": 2, 00:10:45.747 "num_base_bdevs_operational": 3, 00:10:45.747 "base_bdevs_list": [ 00:10:45.747 { 00:10:45.747 "name": "BaseBdev1", 00:10:45.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.747 "is_configured": false, 00:10:45.747 "data_offset": 0, 00:10:45.747 "data_size": 0 00:10:45.747 }, 00:10:45.747 { 00:10:45.747 "name": "BaseBdev2", 00:10:45.747 "uuid": "8c961cc4-e123-4b5a-9da8-51fdb8f4db85", 00:10:45.747 "is_configured": true, 00:10:45.747 "data_offset": 2048, 00:10:45.747 "data_size": 63488 00:10:45.747 }, 00:10:45.747 { 00:10:45.747 "name": "BaseBdev3", 00:10:45.747 "uuid": "918b1ce3-17c3-4409-9b98-e658078ae176", 00:10:45.747 "is_configured": true, 00:10:45.747 "data_offset": 2048, 00:10:45.747 "data_size": 63488 00:10:45.747 } 00:10:45.747 ] 00:10:45.747 }' 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.747 14:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.314 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:46.314 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.314 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.314 [2024-11-20 14:21:25.023521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:46.314 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:46.314 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:46.314 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.314 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.314 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.314 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.314 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.314 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.314 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.314 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.314 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.314 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.314 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.314 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.315 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.315 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.315 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.315 "name": "Existed_Raid", 00:10:46.315 "uuid": "6cb55f00-291f-4ff7-94f5-bec5bb282b6a", 00:10:46.315 
"strip_size_kb": 64, 00:10:46.315 "state": "configuring", 00:10:46.315 "raid_level": "concat", 00:10:46.315 "superblock": true, 00:10:46.315 "num_base_bdevs": 3, 00:10:46.315 "num_base_bdevs_discovered": 1, 00:10:46.315 "num_base_bdevs_operational": 3, 00:10:46.315 "base_bdevs_list": [ 00:10:46.315 { 00:10:46.315 "name": "BaseBdev1", 00:10:46.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.315 "is_configured": false, 00:10:46.315 "data_offset": 0, 00:10:46.315 "data_size": 0 00:10:46.315 }, 00:10:46.315 { 00:10:46.315 "name": null, 00:10:46.315 "uuid": "8c961cc4-e123-4b5a-9da8-51fdb8f4db85", 00:10:46.315 "is_configured": false, 00:10:46.315 "data_offset": 0, 00:10:46.315 "data_size": 63488 00:10:46.315 }, 00:10:46.315 { 00:10:46.315 "name": "BaseBdev3", 00:10:46.315 "uuid": "918b1ce3-17c3-4409-9b98-e658078ae176", 00:10:46.315 "is_configured": true, 00:10:46.315 "data_offset": 2048, 00:10:46.315 "data_size": 63488 00:10:46.315 } 00:10:46.315 ] 00:10:46.315 }' 00:10:46.315 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.315 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.882 [2024-11-20 14:21:25.662985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.882 BaseBdev1 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:46.882 [ 00:10:46.882 { 00:10:46.882 "name": "BaseBdev1", 00:10:46.882 "aliases": [ 00:10:46.882 "a9e457c2-66a1-41d6-8ef2-6356edb47d72" 00:10:46.882 ], 00:10:46.882 "product_name": "Malloc disk", 00:10:46.882 "block_size": 512, 00:10:46.882 "num_blocks": 65536, 00:10:46.882 "uuid": "a9e457c2-66a1-41d6-8ef2-6356edb47d72", 00:10:46.882 "assigned_rate_limits": { 00:10:46.882 "rw_ios_per_sec": 0, 00:10:46.882 "rw_mbytes_per_sec": 0, 00:10:46.882 "r_mbytes_per_sec": 0, 00:10:46.882 "w_mbytes_per_sec": 0 00:10:46.882 }, 00:10:46.882 "claimed": true, 00:10:46.882 "claim_type": "exclusive_write", 00:10:46.882 "zoned": false, 00:10:46.882 "supported_io_types": { 00:10:46.882 "read": true, 00:10:46.882 "write": true, 00:10:46.882 "unmap": true, 00:10:46.882 "flush": true, 00:10:46.882 "reset": true, 00:10:46.882 "nvme_admin": false, 00:10:46.882 "nvme_io": false, 00:10:46.882 "nvme_io_md": false, 00:10:46.882 "write_zeroes": true, 00:10:46.882 "zcopy": true, 00:10:46.882 "get_zone_info": false, 00:10:46.882 "zone_management": false, 00:10:46.882 "zone_append": false, 00:10:46.882 "compare": false, 00:10:46.882 "compare_and_write": false, 00:10:46.882 "abort": true, 00:10:46.882 "seek_hole": false, 00:10:46.882 "seek_data": false, 00:10:46.882 "copy": true, 00:10:46.882 "nvme_iov_md": false 00:10:46.882 }, 00:10:46.882 "memory_domains": [ 00:10:46.882 { 00:10:46.882 "dma_device_id": "system", 00:10:46.882 "dma_device_type": 1 00:10:46.882 }, 00:10:46.882 { 00:10:46.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.882 "dma_device_type": 2 00:10:46.882 } 00:10:46.882 ], 00:10:46.882 "driver_specific": {} 00:10:46.882 } 00:10:46.882 ] 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.882 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.883 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.883 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.883 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.883 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.883 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.883 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.883 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.883 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.883 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.883 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.883 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.883 "name": "Existed_Raid", 00:10:46.883 "uuid": "6cb55f00-291f-4ff7-94f5-bec5bb282b6a", 00:10:46.883 "strip_size_kb": 64, 00:10:46.883 "state": "configuring", 
00:10:46.883 "raid_level": "concat", 00:10:46.883 "superblock": true, 00:10:46.883 "num_base_bdevs": 3, 00:10:46.883 "num_base_bdevs_discovered": 2, 00:10:46.883 "num_base_bdevs_operational": 3, 00:10:46.883 "base_bdevs_list": [ 00:10:46.883 { 00:10:46.883 "name": "BaseBdev1", 00:10:46.883 "uuid": "a9e457c2-66a1-41d6-8ef2-6356edb47d72", 00:10:46.883 "is_configured": true, 00:10:46.883 "data_offset": 2048, 00:10:46.883 "data_size": 63488 00:10:46.883 }, 00:10:46.883 { 00:10:46.883 "name": null, 00:10:46.883 "uuid": "8c961cc4-e123-4b5a-9da8-51fdb8f4db85", 00:10:46.883 "is_configured": false, 00:10:46.883 "data_offset": 0, 00:10:46.883 "data_size": 63488 00:10:46.883 }, 00:10:46.883 { 00:10:46.883 "name": "BaseBdev3", 00:10:46.883 "uuid": "918b1ce3-17c3-4409-9b98-e658078ae176", 00:10:46.883 "is_configured": true, 00:10:46.883 "data_offset": 2048, 00:10:46.883 "data_size": 63488 00:10:46.883 } 00:10:46.883 ] 00:10:46.883 }' 00:10:46.883 14:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.883 14:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.448 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.448 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:47.448 14:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.448 14:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.448 14:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.448 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:47.448 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:47.449 14:21:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.449 [2024-11-20 14:21:26.299243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.449 14:21:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.449 "name": "Existed_Raid", 00:10:47.449 "uuid": "6cb55f00-291f-4ff7-94f5-bec5bb282b6a", 00:10:47.449 "strip_size_kb": 64, 00:10:47.449 "state": "configuring", 00:10:47.449 "raid_level": "concat", 00:10:47.449 "superblock": true, 00:10:47.449 "num_base_bdevs": 3, 00:10:47.449 "num_base_bdevs_discovered": 1, 00:10:47.449 "num_base_bdevs_operational": 3, 00:10:47.449 "base_bdevs_list": [ 00:10:47.449 { 00:10:47.449 "name": "BaseBdev1", 00:10:47.449 "uuid": "a9e457c2-66a1-41d6-8ef2-6356edb47d72", 00:10:47.449 "is_configured": true, 00:10:47.449 "data_offset": 2048, 00:10:47.449 "data_size": 63488 00:10:47.449 }, 00:10:47.449 { 00:10:47.449 "name": null, 00:10:47.449 "uuid": "8c961cc4-e123-4b5a-9da8-51fdb8f4db85", 00:10:47.449 "is_configured": false, 00:10:47.449 "data_offset": 0, 00:10:47.449 "data_size": 63488 00:10:47.449 }, 00:10:47.449 { 00:10:47.449 "name": null, 00:10:47.449 "uuid": "918b1ce3-17c3-4409-9b98-e658078ae176", 00:10:47.449 "is_configured": false, 00:10:47.449 "data_offset": 0, 00:10:47.449 "data_size": 63488 00:10:47.449 } 00:10:47.449 ] 00:10:47.449 }' 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.449 14:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:48.032 14:21:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.032 [2024-11-20 14:21:26.871450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.032 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.032 "name": "Existed_Raid", 00:10:48.032 "uuid": "6cb55f00-291f-4ff7-94f5-bec5bb282b6a", 00:10:48.032 "strip_size_kb": 64, 00:10:48.032 "state": "configuring", 00:10:48.032 "raid_level": "concat", 00:10:48.032 "superblock": true, 00:10:48.032 "num_base_bdevs": 3, 00:10:48.032 "num_base_bdevs_discovered": 2, 00:10:48.032 "num_base_bdevs_operational": 3, 00:10:48.032 "base_bdevs_list": [ 00:10:48.032 { 00:10:48.032 "name": "BaseBdev1", 00:10:48.032 "uuid": "a9e457c2-66a1-41d6-8ef2-6356edb47d72", 00:10:48.033 "is_configured": true, 00:10:48.033 "data_offset": 2048, 00:10:48.033 "data_size": 63488 00:10:48.033 }, 00:10:48.033 { 00:10:48.033 "name": null, 00:10:48.033 "uuid": "8c961cc4-e123-4b5a-9da8-51fdb8f4db85", 00:10:48.033 "is_configured": false, 00:10:48.033 "data_offset": 0, 00:10:48.033 "data_size": 63488 00:10:48.033 }, 00:10:48.033 { 00:10:48.033 "name": "BaseBdev3", 00:10:48.033 "uuid": "918b1ce3-17c3-4409-9b98-e658078ae176", 00:10:48.033 "is_configured": true, 00:10:48.033 "data_offset": 2048, 00:10:48.033 "data_size": 63488 00:10:48.033 } 00:10:48.033 ] 00:10:48.033 }' 00:10:48.033 14:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:10:48.033 14:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.620 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.620 14:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.620 14:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.620 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:48.620 14:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.620 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:48.620 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:48.620 14:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.620 14:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.620 [2024-11-20 14:21:27.447614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:48.620 14:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.620 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:48.620 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.620 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.620 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.620 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:10:48.620 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.620 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.621 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.621 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.621 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.621 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.621 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.621 14:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.621 14:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.621 14:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.621 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.621 "name": "Existed_Raid", 00:10:48.621 "uuid": "6cb55f00-291f-4ff7-94f5-bec5bb282b6a", 00:10:48.621 "strip_size_kb": 64, 00:10:48.621 "state": "configuring", 00:10:48.621 "raid_level": "concat", 00:10:48.621 "superblock": true, 00:10:48.621 "num_base_bdevs": 3, 00:10:48.621 "num_base_bdevs_discovered": 1, 00:10:48.621 "num_base_bdevs_operational": 3, 00:10:48.621 "base_bdevs_list": [ 00:10:48.621 { 00:10:48.621 "name": null, 00:10:48.621 "uuid": "a9e457c2-66a1-41d6-8ef2-6356edb47d72", 00:10:48.621 "is_configured": false, 00:10:48.621 "data_offset": 0, 00:10:48.621 "data_size": 63488 00:10:48.621 }, 00:10:48.621 { 00:10:48.621 "name": null, 00:10:48.621 "uuid": "8c961cc4-e123-4b5a-9da8-51fdb8f4db85", 00:10:48.621 
"is_configured": false, 00:10:48.621 "data_offset": 0, 00:10:48.621 "data_size": 63488 00:10:48.621 }, 00:10:48.621 { 00:10:48.621 "name": "BaseBdev3", 00:10:48.621 "uuid": "918b1ce3-17c3-4409-9b98-e658078ae176", 00:10:48.621 "is_configured": true, 00:10:48.621 "data_offset": 2048, 00:10:48.621 "data_size": 63488 00:10:48.621 } 00:10:48.621 ] 00:10:48.621 }' 00:10:48.621 14:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.621 14:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.188 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.188 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:49.188 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.188 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.189 [2024-11-20 14:21:28.104647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.189 "name": "Existed_Raid", 00:10:49.189 "uuid": "6cb55f00-291f-4ff7-94f5-bec5bb282b6a", 00:10:49.189 "strip_size_kb": 64, 00:10:49.189 "state": "configuring", 00:10:49.189 "raid_level": "concat", 00:10:49.189 "superblock": true, 
00:10:49.189 "num_base_bdevs": 3, 00:10:49.189 "num_base_bdevs_discovered": 2, 00:10:49.189 "num_base_bdevs_operational": 3, 00:10:49.189 "base_bdevs_list": [ 00:10:49.189 { 00:10:49.189 "name": null, 00:10:49.189 "uuid": "a9e457c2-66a1-41d6-8ef2-6356edb47d72", 00:10:49.189 "is_configured": false, 00:10:49.189 "data_offset": 0, 00:10:49.189 "data_size": 63488 00:10:49.189 }, 00:10:49.189 { 00:10:49.189 "name": "BaseBdev2", 00:10:49.189 "uuid": "8c961cc4-e123-4b5a-9da8-51fdb8f4db85", 00:10:49.189 "is_configured": true, 00:10:49.189 "data_offset": 2048, 00:10:49.189 "data_size": 63488 00:10:49.189 }, 00:10:49.189 { 00:10:49.189 "name": "BaseBdev3", 00:10:49.189 "uuid": "918b1ce3-17c3-4409-9b98-e658078ae176", 00:10:49.189 "is_configured": true, 00:10:49.189 "data_offset": 2048, 00:10:49.189 "data_size": 63488 00:10:49.189 } 00:10:49.189 ] 00:10:49.189 }' 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.189 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.757 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:49.757 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.757 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.757 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.757 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.757 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:49.757 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.757 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r 
'.[0].base_bdevs_list[0].uuid' 00:10:49.757 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.757 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.757 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.015 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a9e457c2-66a1-41d6-8ef2-6356edb47d72 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.016 [2024-11-20 14:21:28.779599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:50.016 [2024-11-20 14:21:28.779865] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:50.016 [2024-11-20 14:21:28.779889] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:50.016 NewBaseBdev 00:10:50.016 [2024-11-20 14:21:28.780259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:50.016 [2024-11-20 14:21:28.780451] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:50.016 [2024-11-20 14:21:28.780468] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:50.016 [2024-11-20 14:21:28.780634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.016 [ 00:10:50.016 { 00:10:50.016 "name": "NewBaseBdev", 00:10:50.016 "aliases": [ 00:10:50.016 "a9e457c2-66a1-41d6-8ef2-6356edb47d72" 00:10:50.016 ], 00:10:50.016 "product_name": "Malloc disk", 00:10:50.016 "block_size": 512, 00:10:50.016 "num_blocks": 65536, 00:10:50.016 "uuid": "a9e457c2-66a1-41d6-8ef2-6356edb47d72", 00:10:50.016 "assigned_rate_limits": { 00:10:50.016 "rw_ios_per_sec": 0, 00:10:50.016 "rw_mbytes_per_sec": 0, 00:10:50.016 "r_mbytes_per_sec": 0, 00:10:50.016 "w_mbytes_per_sec": 0 00:10:50.016 }, 00:10:50.016 "claimed": true, 00:10:50.016 "claim_type": "exclusive_write", 00:10:50.016 "zoned": false, 00:10:50.016 "supported_io_types": { 
00:10:50.016 "read": true, 00:10:50.016 "write": true, 00:10:50.016 "unmap": true, 00:10:50.016 "flush": true, 00:10:50.016 "reset": true, 00:10:50.016 "nvme_admin": false, 00:10:50.016 "nvme_io": false, 00:10:50.016 "nvme_io_md": false, 00:10:50.016 "write_zeroes": true, 00:10:50.016 "zcopy": true, 00:10:50.016 "get_zone_info": false, 00:10:50.016 "zone_management": false, 00:10:50.016 "zone_append": false, 00:10:50.016 "compare": false, 00:10:50.016 "compare_and_write": false, 00:10:50.016 "abort": true, 00:10:50.016 "seek_hole": false, 00:10:50.016 "seek_data": false, 00:10:50.016 "copy": true, 00:10:50.016 "nvme_iov_md": false 00:10:50.016 }, 00:10:50.016 "memory_domains": [ 00:10:50.016 { 00:10:50.016 "dma_device_id": "system", 00:10:50.016 "dma_device_type": 1 00:10:50.016 }, 00:10:50.016 { 00:10:50.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.016 "dma_device_type": 2 00:10:50.016 } 00:10:50.016 ], 00:10:50.016 "driver_specific": {} 00:10:50.016 } 00:10:50.016 ] 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.016 14:21:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.016 "name": "Existed_Raid", 00:10:50.016 "uuid": "6cb55f00-291f-4ff7-94f5-bec5bb282b6a", 00:10:50.016 "strip_size_kb": 64, 00:10:50.016 "state": "online", 00:10:50.016 "raid_level": "concat", 00:10:50.016 "superblock": true, 00:10:50.016 "num_base_bdevs": 3, 00:10:50.016 "num_base_bdevs_discovered": 3, 00:10:50.016 "num_base_bdevs_operational": 3, 00:10:50.016 "base_bdevs_list": [ 00:10:50.016 { 00:10:50.016 "name": "NewBaseBdev", 00:10:50.016 "uuid": "a9e457c2-66a1-41d6-8ef2-6356edb47d72", 00:10:50.016 "is_configured": true, 00:10:50.016 "data_offset": 2048, 00:10:50.016 "data_size": 63488 00:10:50.016 }, 00:10:50.016 { 00:10:50.016 "name": "BaseBdev2", 00:10:50.016 "uuid": "8c961cc4-e123-4b5a-9da8-51fdb8f4db85", 00:10:50.016 "is_configured": true, 00:10:50.016 "data_offset": 2048, 00:10:50.016 "data_size": 63488 00:10:50.016 }, 00:10:50.016 { 00:10:50.016 
"name": "BaseBdev3", 00:10:50.016 "uuid": "918b1ce3-17c3-4409-9b98-e658078ae176", 00:10:50.016 "is_configured": true, 00:10:50.016 "data_offset": 2048, 00:10:50.016 "data_size": 63488 00:10:50.016 } 00:10:50.016 ] 00:10:50.016 }' 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.016 14:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.583 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:50.583 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:50.583 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.584 [2024-11-20 14:21:29.336250] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:50.584 "name": "Existed_Raid", 00:10:50.584 "aliases": [ 
00:10:50.584 "6cb55f00-291f-4ff7-94f5-bec5bb282b6a" 00:10:50.584 ], 00:10:50.584 "product_name": "Raid Volume", 00:10:50.584 "block_size": 512, 00:10:50.584 "num_blocks": 190464, 00:10:50.584 "uuid": "6cb55f00-291f-4ff7-94f5-bec5bb282b6a", 00:10:50.584 "assigned_rate_limits": { 00:10:50.584 "rw_ios_per_sec": 0, 00:10:50.584 "rw_mbytes_per_sec": 0, 00:10:50.584 "r_mbytes_per_sec": 0, 00:10:50.584 "w_mbytes_per_sec": 0 00:10:50.584 }, 00:10:50.584 "claimed": false, 00:10:50.584 "zoned": false, 00:10:50.584 "supported_io_types": { 00:10:50.584 "read": true, 00:10:50.584 "write": true, 00:10:50.584 "unmap": true, 00:10:50.584 "flush": true, 00:10:50.584 "reset": true, 00:10:50.584 "nvme_admin": false, 00:10:50.584 "nvme_io": false, 00:10:50.584 "nvme_io_md": false, 00:10:50.584 "write_zeroes": true, 00:10:50.584 "zcopy": false, 00:10:50.584 "get_zone_info": false, 00:10:50.584 "zone_management": false, 00:10:50.584 "zone_append": false, 00:10:50.584 "compare": false, 00:10:50.584 "compare_and_write": false, 00:10:50.584 "abort": false, 00:10:50.584 "seek_hole": false, 00:10:50.584 "seek_data": false, 00:10:50.584 "copy": false, 00:10:50.584 "nvme_iov_md": false 00:10:50.584 }, 00:10:50.584 "memory_domains": [ 00:10:50.584 { 00:10:50.584 "dma_device_id": "system", 00:10:50.584 "dma_device_type": 1 00:10:50.584 }, 00:10:50.584 { 00:10:50.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.584 "dma_device_type": 2 00:10:50.584 }, 00:10:50.584 { 00:10:50.584 "dma_device_id": "system", 00:10:50.584 "dma_device_type": 1 00:10:50.584 }, 00:10:50.584 { 00:10:50.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.584 "dma_device_type": 2 00:10:50.584 }, 00:10:50.584 { 00:10:50.584 "dma_device_id": "system", 00:10:50.584 "dma_device_type": 1 00:10:50.584 }, 00:10:50.584 { 00:10:50.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.584 "dma_device_type": 2 00:10:50.584 } 00:10:50.584 ], 00:10:50.584 "driver_specific": { 00:10:50.584 "raid": { 00:10:50.584 "uuid": 
"6cb55f00-291f-4ff7-94f5-bec5bb282b6a", 00:10:50.584 "strip_size_kb": 64, 00:10:50.584 "state": "online", 00:10:50.584 "raid_level": "concat", 00:10:50.584 "superblock": true, 00:10:50.584 "num_base_bdevs": 3, 00:10:50.584 "num_base_bdevs_discovered": 3, 00:10:50.584 "num_base_bdevs_operational": 3, 00:10:50.584 "base_bdevs_list": [ 00:10:50.584 { 00:10:50.584 "name": "NewBaseBdev", 00:10:50.584 "uuid": "a9e457c2-66a1-41d6-8ef2-6356edb47d72", 00:10:50.584 "is_configured": true, 00:10:50.584 "data_offset": 2048, 00:10:50.584 "data_size": 63488 00:10:50.584 }, 00:10:50.584 { 00:10:50.584 "name": "BaseBdev2", 00:10:50.584 "uuid": "8c961cc4-e123-4b5a-9da8-51fdb8f4db85", 00:10:50.584 "is_configured": true, 00:10:50.584 "data_offset": 2048, 00:10:50.584 "data_size": 63488 00:10:50.584 }, 00:10:50.584 { 00:10:50.584 "name": "BaseBdev3", 00:10:50.584 "uuid": "918b1ce3-17c3-4409-9b98-e658078ae176", 00:10:50.584 "is_configured": true, 00:10:50.584 "data_offset": 2048, 00:10:50.584 "data_size": 63488 00:10:50.584 } 00:10:50.584 ] 00:10:50.584 } 00:10:50.584 } 00:10:50.584 }' 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:50.584 BaseBdev2 00:10:50.584 BaseBdev3' 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:50.584 14:21:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.584 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.843 [2024-11-20 14:21:29.647869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:50.843 [2024-11-20 14:21:29.647900] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.843 [2024-11-20 14:21:29.647995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.843 [2024-11-20 14:21:29.648088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.843 [2024-11-20 14:21:29.648120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66267 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66267 ']' 
00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66267 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66267 00:10:50.843 killing process with pid 66267 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66267' 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66267 00:10:50.843 [2024-11-20 14:21:29.683870] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.843 14:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66267 00:10:51.102 [2024-11-20 14:21:29.944595] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.039 14:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:52.039 ************************************ 00:10:52.039 END TEST raid_state_function_test_sb 00:10:52.039 ************************************ 00:10:52.039 00:10:52.039 real 0m11.914s 00:10:52.039 user 0m19.879s 00:10:52.039 sys 0m1.573s 00:10:52.039 14:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.039 14:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.298 14:21:31 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 
3 00:10:52.298 14:21:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:52.298 14:21:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.298 14:21:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.298 ************************************ 00:10:52.298 START TEST raid_superblock_test 00:10:52.298 ************************************ 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 
00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66904 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66904 00:10:52.298 14:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66904 ']' 00:10:52.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.299 14:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.299 14:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.299 14:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.299 14:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.299 14:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.299 [2024-11-20 14:21:31.159643] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:10:52.299 [2024-11-20 14:21:31.160006] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66904 ] 00:10:52.557 [2024-11-20 14:21:31.343086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.557 [2024-11-20 14:21:31.465547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.816 [2024-11-20 14:21:31.669228] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.816 [2024-11-20 14:21:31.669283] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:53.384 
14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.384 malloc1 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.384 [2024-11-20 14:21:32.179796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:53.384 [2024-11-20 14:21:32.179883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.384 [2024-11-20 14:21:32.179915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:53.384 [2024-11-20 14:21:32.179932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.384 [2024-11-20 14:21:32.182790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.384 [2024-11-20 14:21:32.182963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:53.384 pt1 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.384 malloc2 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.384 [2024-11-20 14:21:32.234858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:53.384 [2024-11-20 14:21:32.235063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.384 [2024-11-20 14:21:32.235126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:53.384 [2024-11-20 14:21:32.235144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.384 [2024-11-20 14:21:32.237907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.384 [2024-11-20 14:21:32.237952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:53.384 
pt2 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.384 malloc3 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.384 [2024-11-20 14:21:32.304131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:53.384 [2024-11-20 14:21:32.304197] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.384 [2024-11-20 14:21:32.304231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:53.384 [2024-11-20 14:21:32.304248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.384 [2024-11-20 14:21:32.307019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.384 [2024-11-20 14:21:32.307227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:53.384 pt3 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.384 [2024-11-20 14:21:32.316218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:53.384 [2024-11-20 14:21:32.318675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:53.384 [2024-11-20 14:21:32.318763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:53.384 [2024-11-20 14:21:32.318947] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:53.384 [2024-11-20 14:21:32.318969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:53.384 [2024-11-20 14:21:32.319359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:53.384 [2024-11-20 14:21:32.319566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:53.384 [2024-11-20 14:21:32.319587] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:53.384 [2024-11-20 14:21:32.319781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.384 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.385 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.385 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.385 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.385 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.385 14:21:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.385 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.643 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.643 "name": "raid_bdev1", 00:10:53.643 "uuid": "ee4a6fa1-9873-412e-91b2-f9d934e13e2e", 00:10:53.643 "strip_size_kb": 64, 00:10:53.643 "state": "online", 00:10:53.643 "raid_level": "concat", 00:10:53.643 "superblock": true, 00:10:53.643 "num_base_bdevs": 3, 00:10:53.643 "num_base_bdevs_discovered": 3, 00:10:53.643 "num_base_bdevs_operational": 3, 00:10:53.643 "base_bdevs_list": [ 00:10:53.643 { 00:10:53.643 "name": "pt1", 00:10:53.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.643 "is_configured": true, 00:10:53.643 "data_offset": 2048, 00:10:53.643 "data_size": 63488 00:10:53.643 }, 00:10:53.643 { 00:10:53.643 "name": "pt2", 00:10:53.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.643 "is_configured": true, 00:10:53.643 "data_offset": 2048, 00:10:53.643 "data_size": 63488 00:10:53.643 }, 00:10:53.643 { 00:10:53.643 "name": "pt3", 00:10:53.643 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.643 "is_configured": true, 00:10:53.643 "data_offset": 2048, 00:10:53.643 "data_size": 63488 00:10:53.643 } 00:10:53.643 ] 00:10:53.643 }' 00:10:53.643 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.643 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.902 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:53.902 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:53.902 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:53.902 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:53.902 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:53.902 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:53.902 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:53.902 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.902 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.902 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:53.902 [2024-11-20 14:21:32.820721] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.902 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.902 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:53.902 "name": "raid_bdev1", 00:10:53.902 "aliases": [ 00:10:53.902 "ee4a6fa1-9873-412e-91b2-f9d934e13e2e" 00:10:53.902 ], 00:10:53.902 "product_name": "Raid Volume", 00:10:53.902 "block_size": 512, 00:10:53.902 "num_blocks": 190464, 00:10:53.902 "uuid": "ee4a6fa1-9873-412e-91b2-f9d934e13e2e", 00:10:53.902 "assigned_rate_limits": { 00:10:53.902 "rw_ios_per_sec": 0, 00:10:53.902 "rw_mbytes_per_sec": 0, 00:10:53.902 "r_mbytes_per_sec": 0, 00:10:53.902 "w_mbytes_per_sec": 0 00:10:53.902 }, 00:10:53.902 "claimed": false, 00:10:53.902 "zoned": false, 00:10:53.902 "supported_io_types": { 00:10:53.903 "read": true, 00:10:53.903 "write": true, 00:10:53.903 "unmap": true, 00:10:53.903 "flush": true, 00:10:53.903 "reset": true, 00:10:53.903 "nvme_admin": false, 00:10:53.903 "nvme_io": false, 00:10:53.903 "nvme_io_md": false, 00:10:53.903 "write_zeroes": true, 00:10:53.903 "zcopy": false, 00:10:53.903 "get_zone_info": false, 00:10:53.903 "zone_management": false, 00:10:53.903 "zone_append": false, 00:10:53.903 "compare": 
false, 00:10:53.903 "compare_and_write": false, 00:10:53.903 "abort": false, 00:10:53.903 "seek_hole": false, 00:10:53.903 "seek_data": false, 00:10:53.903 "copy": false, 00:10:53.903 "nvme_iov_md": false 00:10:53.903 }, 00:10:53.903 "memory_domains": [ 00:10:53.903 { 00:10:53.903 "dma_device_id": "system", 00:10:53.903 "dma_device_type": 1 00:10:53.903 }, 00:10:53.903 { 00:10:53.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.903 "dma_device_type": 2 00:10:53.903 }, 00:10:53.903 { 00:10:53.903 "dma_device_id": "system", 00:10:53.903 "dma_device_type": 1 00:10:53.903 }, 00:10:53.903 { 00:10:53.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.903 "dma_device_type": 2 00:10:53.903 }, 00:10:53.903 { 00:10:53.903 "dma_device_id": "system", 00:10:53.903 "dma_device_type": 1 00:10:53.903 }, 00:10:53.903 { 00:10:53.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.903 "dma_device_type": 2 00:10:53.903 } 00:10:53.903 ], 00:10:53.903 "driver_specific": { 00:10:53.903 "raid": { 00:10:53.903 "uuid": "ee4a6fa1-9873-412e-91b2-f9d934e13e2e", 00:10:53.903 "strip_size_kb": 64, 00:10:53.903 "state": "online", 00:10:53.903 "raid_level": "concat", 00:10:53.903 "superblock": true, 00:10:53.903 "num_base_bdevs": 3, 00:10:53.903 "num_base_bdevs_discovered": 3, 00:10:53.903 "num_base_bdevs_operational": 3, 00:10:53.903 "base_bdevs_list": [ 00:10:53.903 { 00:10:53.903 "name": "pt1", 00:10:53.903 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.903 "is_configured": true, 00:10:53.903 "data_offset": 2048, 00:10:53.903 "data_size": 63488 00:10:53.903 }, 00:10:53.903 { 00:10:53.903 "name": "pt2", 00:10:53.903 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.903 "is_configured": true, 00:10:53.903 "data_offset": 2048, 00:10:53.903 "data_size": 63488 00:10:53.903 }, 00:10:53.903 { 00:10:53.903 "name": "pt3", 00:10:53.903 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.903 "is_configured": true, 00:10:53.903 "data_offset": 2048, 00:10:53.903 
"data_size": 63488 00:10:53.903 } 00:10:53.903 ] 00:10:53.903 } 00:10:53.903 } 00:10:53.903 }' 00:10:53.903 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:54.162 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:54.162 pt2 00:10:54.162 pt3' 00:10:54.162 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.162 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:54.162 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.162 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:54.162 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.162 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.162 14:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.162 14:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.162 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.162 [2024-11-20 14:21:33.136732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.421 14:21:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.421 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ee4a6fa1-9873-412e-91b2-f9d934e13e2e 00:10:54.421 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ee4a6fa1-9873-412e-91b2-f9d934e13e2e ']' 00:10:54.421 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:54.421 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.421 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.422 [2024-11-20 14:21:33.184446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:54.422 [2024-11-20 14:21:33.184491] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.422 [2024-11-20 14:21:33.184568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.422 [2024-11-20 14:21:33.184660] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.422 [2024-11-20 14:21:33.184676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.422 14:21:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.422 [2024-11-20 14:21:33.340588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:54.422 [2024-11-20 14:21:33.343049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:10:54.422 [2024-11-20 14:21:33.343133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:54.422 [2024-11-20 14:21:33.343202] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:54.422 [2024-11-20 14:21:33.343270] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:54.422 [2024-11-20 14:21:33.343303] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:54.422 [2024-11-20 14:21:33.343330] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:54.422 [2024-11-20 14:21:33.343343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:54.422 request: 00:10:54.422 { 00:10:54.422 "name": "raid_bdev1", 00:10:54.422 "raid_level": "concat", 00:10:54.422 "base_bdevs": [ 00:10:54.422 "malloc1", 00:10:54.422 "malloc2", 00:10:54.422 "malloc3" 00:10:54.422 ], 00:10:54.422 "strip_size_kb": 64, 00:10:54.422 "superblock": false, 00:10:54.422 "method": "bdev_raid_create", 00:10:54.422 "req_id": 1 00:10:54.422 } 00:10:54.422 Got JSON-RPC error response 00:10:54.422 response: 00:10:54.422 { 00:10:54.422 "code": -17, 00:10:54.422 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:54.422 } 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.422 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.681 [2024-11-20 14:21:33.404534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:54.681 [2024-11-20 14:21:33.404759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.681 [2024-11-20 14:21:33.404800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:54.681 [2024-11-20 14:21:33.404817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.681 [2024-11-20 14:21:33.407713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.681 [2024-11-20 14:21:33.407771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:54.681 [2024-11-20 14:21:33.407887] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:54.681 [2024-11-20 14:21:33.407953] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:54.681 pt1 00:10:54.681 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.681 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:54.681 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.681 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.681 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.681 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.681 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.681 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.681 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.681 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.682 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.682 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.682 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.682 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.682 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.682 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.682 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.682 "name": "raid_bdev1", 
00:10:54.682 "uuid": "ee4a6fa1-9873-412e-91b2-f9d934e13e2e", 00:10:54.682 "strip_size_kb": 64, 00:10:54.682 "state": "configuring", 00:10:54.682 "raid_level": "concat", 00:10:54.682 "superblock": true, 00:10:54.682 "num_base_bdevs": 3, 00:10:54.682 "num_base_bdevs_discovered": 1, 00:10:54.682 "num_base_bdevs_operational": 3, 00:10:54.682 "base_bdevs_list": [ 00:10:54.682 { 00:10:54.682 "name": "pt1", 00:10:54.682 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:54.682 "is_configured": true, 00:10:54.682 "data_offset": 2048, 00:10:54.682 "data_size": 63488 00:10:54.682 }, 00:10:54.682 { 00:10:54.682 "name": null, 00:10:54.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:54.682 "is_configured": false, 00:10:54.682 "data_offset": 2048, 00:10:54.682 "data_size": 63488 00:10:54.682 }, 00:10:54.682 { 00:10:54.682 "name": null, 00:10:54.682 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:54.682 "is_configured": false, 00:10:54.682 "data_offset": 2048, 00:10:54.682 "data_size": 63488 00:10:54.682 } 00:10:54.682 ] 00:10:54.682 }' 00:10:54.682 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.682 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.940 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:54.940 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:54.940 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.940 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.940 [2024-11-20 14:21:33.916748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:54.940 [2024-11-20 14:21:33.916966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.940 [2024-11-20 14:21:33.917143] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:54.940 [2024-11-20 14:21:33.917279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.940 [2024-11-20 14:21:33.917863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.940 [2024-11-20 14:21:33.918022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:54.940 [2024-11-20 14:21:33.918245] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:54.940 [2024-11-20 14:21:33.918294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:54.940 pt2 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.199 [2024-11-20 14:21:33.924730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.199 "name": "raid_bdev1", 00:10:55.199 "uuid": "ee4a6fa1-9873-412e-91b2-f9d934e13e2e", 00:10:55.199 "strip_size_kb": 64, 00:10:55.199 "state": "configuring", 00:10:55.199 "raid_level": "concat", 00:10:55.199 "superblock": true, 00:10:55.199 "num_base_bdevs": 3, 00:10:55.199 "num_base_bdevs_discovered": 1, 00:10:55.199 "num_base_bdevs_operational": 3, 00:10:55.199 "base_bdevs_list": [ 00:10:55.199 { 00:10:55.199 "name": "pt1", 00:10:55.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:55.199 "is_configured": true, 00:10:55.199 "data_offset": 2048, 00:10:55.199 "data_size": 63488 00:10:55.199 }, 00:10:55.199 { 00:10:55.199 "name": null, 00:10:55.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:55.199 "is_configured": false, 00:10:55.199 "data_offset": 0, 00:10:55.199 "data_size": 63488 00:10:55.199 }, 00:10:55.199 { 00:10:55.199 "name": null, 00:10:55.199 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:55.199 "is_configured": false, 00:10:55.199 "data_offset": 2048, 00:10:55.199 "data_size": 63488 00:10:55.199 } 00:10:55.199 ] 00:10:55.199 }' 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.199 14:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.767 [2024-11-20 14:21:34.468921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:55.767 [2024-11-20 14:21:34.469048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.767 [2024-11-20 14:21:34.469080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:55.767 [2024-11-20 14:21:34.469099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.767 [2024-11-20 14:21:34.469655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.767 [2024-11-20 14:21:34.469702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:55.767 [2024-11-20 14:21:34.469840] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:55.767 [2024-11-20 14:21:34.469875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:55.767 pt2 00:10:55.767 14:21:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.767 [2024-11-20 14:21:34.480907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:55.767 [2024-11-20 14:21:34.480991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.767 [2024-11-20 14:21:34.481046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:55.767 [2024-11-20 14:21:34.481068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.767 [2024-11-20 14:21:34.481492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.767 [2024-11-20 14:21:34.481542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:55.767 [2024-11-20 14:21:34.481617] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:55.767 [2024-11-20 14:21:34.481649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:55.767 [2024-11-20 14:21:34.481807] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:55.767 [2024-11-20 14:21:34.481840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:55.767 [2024-11-20 14:21:34.482189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:10:55.767 [2024-11-20 14:21:34.482381] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:55.767 [2024-11-20 14:21:34.482396] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:55.767 [2024-11-20 14:21:34.482562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.767 pt3 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.767 14:21:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.767 "name": "raid_bdev1", 00:10:55.767 "uuid": "ee4a6fa1-9873-412e-91b2-f9d934e13e2e", 00:10:55.767 "strip_size_kb": 64, 00:10:55.767 "state": "online", 00:10:55.767 "raid_level": "concat", 00:10:55.767 "superblock": true, 00:10:55.767 "num_base_bdevs": 3, 00:10:55.767 "num_base_bdevs_discovered": 3, 00:10:55.767 "num_base_bdevs_operational": 3, 00:10:55.767 "base_bdevs_list": [ 00:10:55.767 { 00:10:55.767 "name": "pt1", 00:10:55.767 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:55.767 "is_configured": true, 00:10:55.767 "data_offset": 2048, 00:10:55.767 "data_size": 63488 00:10:55.767 }, 00:10:55.767 { 00:10:55.767 "name": "pt2", 00:10:55.767 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:55.767 "is_configured": true, 00:10:55.767 "data_offset": 2048, 00:10:55.767 "data_size": 63488 00:10:55.767 }, 00:10:55.767 { 00:10:55.767 "name": "pt3", 00:10:55.767 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:55.767 "is_configured": true, 00:10:55.767 "data_offset": 2048, 00:10:55.767 "data_size": 63488 00:10:55.767 } 00:10:55.767 ] 00:10:55.767 }' 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.767 14:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.052 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:56.052 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:10:56.052 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.052 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.052 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.052 14:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.052 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:56.052 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.052 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.052 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.312 [2024-11-20 14:21:35.009544] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.312 "name": "raid_bdev1", 00:10:56.312 "aliases": [ 00:10:56.312 "ee4a6fa1-9873-412e-91b2-f9d934e13e2e" 00:10:56.312 ], 00:10:56.312 "product_name": "Raid Volume", 00:10:56.312 "block_size": 512, 00:10:56.312 "num_blocks": 190464, 00:10:56.312 "uuid": "ee4a6fa1-9873-412e-91b2-f9d934e13e2e", 00:10:56.312 "assigned_rate_limits": { 00:10:56.312 "rw_ios_per_sec": 0, 00:10:56.312 "rw_mbytes_per_sec": 0, 00:10:56.312 "r_mbytes_per_sec": 0, 00:10:56.312 "w_mbytes_per_sec": 0 00:10:56.312 }, 00:10:56.312 "claimed": false, 00:10:56.312 "zoned": false, 00:10:56.312 "supported_io_types": { 00:10:56.312 "read": true, 00:10:56.312 "write": true, 00:10:56.312 "unmap": true, 00:10:56.312 "flush": true, 00:10:56.312 "reset": true, 00:10:56.312 "nvme_admin": false, 00:10:56.312 "nvme_io": false, 
00:10:56.312 "nvme_io_md": false, 00:10:56.312 "write_zeroes": true, 00:10:56.312 "zcopy": false, 00:10:56.312 "get_zone_info": false, 00:10:56.312 "zone_management": false, 00:10:56.312 "zone_append": false, 00:10:56.312 "compare": false, 00:10:56.312 "compare_and_write": false, 00:10:56.312 "abort": false, 00:10:56.312 "seek_hole": false, 00:10:56.312 "seek_data": false, 00:10:56.312 "copy": false, 00:10:56.312 "nvme_iov_md": false 00:10:56.312 }, 00:10:56.312 "memory_domains": [ 00:10:56.312 { 00:10:56.312 "dma_device_id": "system", 00:10:56.312 "dma_device_type": 1 00:10:56.312 }, 00:10:56.312 { 00:10:56.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.312 "dma_device_type": 2 00:10:56.312 }, 00:10:56.312 { 00:10:56.312 "dma_device_id": "system", 00:10:56.312 "dma_device_type": 1 00:10:56.312 }, 00:10:56.312 { 00:10:56.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.312 "dma_device_type": 2 00:10:56.312 }, 00:10:56.312 { 00:10:56.312 "dma_device_id": "system", 00:10:56.312 "dma_device_type": 1 00:10:56.312 }, 00:10:56.312 { 00:10:56.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.312 "dma_device_type": 2 00:10:56.312 } 00:10:56.312 ], 00:10:56.312 "driver_specific": { 00:10:56.312 "raid": { 00:10:56.312 "uuid": "ee4a6fa1-9873-412e-91b2-f9d934e13e2e", 00:10:56.312 "strip_size_kb": 64, 00:10:56.312 "state": "online", 00:10:56.312 "raid_level": "concat", 00:10:56.312 "superblock": true, 00:10:56.312 "num_base_bdevs": 3, 00:10:56.312 "num_base_bdevs_discovered": 3, 00:10:56.312 "num_base_bdevs_operational": 3, 00:10:56.312 "base_bdevs_list": [ 00:10:56.312 { 00:10:56.312 "name": "pt1", 00:10:56.312 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.312 "is_configured": true, 00:10:56.312 "data_offset": 2048, 00:10:56.312 "data_size": 63488 00:10:56.312 }, 00:10:56.312 { 00:10:56.312 "name": "pt2", 00:10:56.312 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.312 "is_configured": true, 00:10:56.312 "data_offset": 2048, 00:10:56.312 
"data_size": 63488 00:10:56.312 }, 00:10:56.312 { 00:10:56.312 "name": "pt3", 00:10:56.312 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:56.312 "is_configured": true, 00:10:56.312 "data_offset": 2048, 00:10:56.312 "data_size": 63488 00:10:56.312 } 00:10:56.312 ] 00:10:56.312 } 00:10:56.312 } 00:10:56.312 }' 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:56.312 pt2 00:10:56.312 pt3' 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.312 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.313 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.313 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.313 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:56.313 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.313 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.313 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.572 [2024-11-20 14:21:35.337560] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ee4a6fa1-9873-412e-91b2-f9d934e13e2e '!=' ee4a6fa1-9873-412e-91b2-f9d934e13e2e ']' 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66904 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66904 ']' 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66904 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66904 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66904' 00:10:56.572 killing process with pid 66904 00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66904 00:10:56.572 [2024-11-20 14:21:35.429202] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:10:56.572 14:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66904 00:10:56.572 [2024-11-20 14:21:35.429428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.572 [2024-11-20 14:21:35.429524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.572 [2024-11-20 14:21:35.429544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:56.836 [2024-11-20 14:21:35.694693] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.779 ************************************ 00:10:57.779 END TEST raid_superblock_test 00:10:57.779 ************************************ 00:10:57.779 14:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:57.779 00:10:57.779 real 0m5.698s 00:10:57.779 user 0m8.603s 00:10:57.779 sys 0m0.817s 00:10:57.779 14:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.779 14:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.038 14:21:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:10:58.038 14:21:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:58.038 14:21:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.038 14:21:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.038 ************************************ 00:10:58.038 START TEST raid_read_error_test 00:10:58.038 ************************************ 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:58.038 14:21:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YweL78pJ2K 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67168 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67168 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67168 ']' 00:10:58.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.038 14:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.038 [2024-11-20 14:21:36.927950] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:10:58.039 [2024-11-20 14:21:36.928698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67168 ] 00:10:58.298 [2024-11-20 14:21:37.111348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.298 [2024-11-20 14:21:37.243868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.556 [2024-11-20 14:21:37.450105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.556 [2024-11-20 14:21:37.450183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.160 BaseBdev1_malloc 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.160 true 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.160 [2024-11-20 14:21:37.929376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:59.160 [2024-11-20 14:21:37.929483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.160 [2024-11-20 14:21:37.929516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:59.160 [2024-11-20 14:21:37.929532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.160 [2024-11-20 14:21:37.932494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.160 [2024-11-20 14:21:37.932556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:59.160 BaseBdev1 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.160 BaseBdev2_malloc 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.160 true 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.160 [2024-11-20 14:21:37.995276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:59.160 [2024-11-20 14:21:37.995608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.160 [2024-11-20 14:21:37.995650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:59.160 [2024-11-20 14:21:37.995670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.160 [2024-11-20 14:21:37.998736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.160 [2024-11-20 14:21:37.998907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:59.160 BaseBdev2 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.160 14:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.160 BaseBdev3_malloc 00:10:59.160 14:21:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.160 true 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.160 [2024-11-20 14:21:38.073992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:59.160 [2024-11-20 14:21:38.074155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.160 [2024-11-20 14:21:38.074188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:59.160 [2024-11-20 14:21:38.074205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.160 [2024-11-20 14:21:38.077130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.160 [2024-11-20 14:21:38.077194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:59.160 BaseBdev3 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.160 [2024-11-20 14:21:38.086164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.160 [2024-11-20 14:21:38.088878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.160 [2024-11-20 14:21:38.088979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.160 [2024-11-20 14:21:38.089325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:59.160 [2024-11-20 14:21:38.089345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:59.160 [2024-11-20 14:21:38.089708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:59.160 [2024-11-20 14:21:38.090113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:59.160 [2024-11-20 14:21:38.090158] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:59.160 [2024-11-20 14:21:38.090445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.160 14:21:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.160 14:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.420 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.420 "name": "raid_bdev1", 00:10:59.420 "uuid": "78825568-7c74-4342-8cc4-be42112ec433", 00:10:59.420 "strip_size_kb": 64, 00:10:59.420 "state": "online", 00:10:59.420 "raid_level": "concat", 00:10:59.420 "superblock": true, 00:10:59.420 "num_base_bdevs": 3, 00:10:59.420 "num_base_bdevs_discovered": 3, 00:10:59.420 "num_base_bdevs_operational": 3, 00:10:59.420 "base_bdevs_list": [ 00:10:59.420 { 00:10:59.420 "name": "BaseBdev1", 00:10:59.420 "uuid": "d746baf9-6293-5867-9152-f7deea00e6f0", 00:10:59.420 "is_configured": true, 00:10:59.420 "data_offset": 2048, 00:10:59.420 "data_size": 63488 00:10:59.420 }, 00:10:59.420 { 00:10:59.420 "name": "BaseBdev2", 00:10:59.420 "uuid": "fe3dc165-9f54-5209-b197-4a4f98db89a4", 00:10:59.420 "is_configured": true, 00:10:59.420 "data_offset": 2048, 00:10:59.420 "data_size": 63488 
00:10:59.420 }, 00:10:59.420 { 00:10:59.420 "name": "BaseBdev3", 00:10:59.420 "uuid": "153c2c0a-ea9c-54df-8710-eef41901db28", 00:10:59.420 "is_configured": true, 00:10:59.420 "data_offset": 2048, 00:10:59.420 "data_size": 63488 00:10:59.420 } 00:10:59.420 ] 00:10:59.420 }' 00:10:59.420 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.420 14:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.679 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:59.679 14:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:59.938 [2024-11-20 14:21:38.751970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.877 "name": "raid_bdev1", 00:11:00.877 "uuid": "78825568-7c74-4342-8cc4-be42112ec433", 00:11:00.877 "strip_size_kb": 64, 00:11:00.877 "state": "online", 00:11:00.877 "raid_level": "concat", 00:11:00.877 "superblock": true, 00:11:00.877 "num_base_bdevs": 3, 00:11:00.877 "num_base_bdevs_discovered": 3, 00:11:00.877 "num_base_bdevs_operational": 3, 00:11:00.877 "base_bdevs_list": [ 00:11:00.877 { 00:11:00.877 "name": "BaseBdev1", 00:11:00.877 "uuid": "d746baf9-6293-5867-9152-f7deea00e6f0", 00:11:00.877 "is_configured": true, 00:11:00.877 "data_offset": 2048, 00:11:00.877 "data_size": 63488 
00:11:00.877 }, 00:11:00.877 { 00:11:00.877 "name": "BaseBdev2", 00:11:00.877 "uuid": "fe3dc165-9f54-5209-b197-4a4f98db89a4", 00:11:00.877 "is_configured": true, 00:11:00.877 "data_offset": 2048, 00:11:00.877 "data_size": 63488 00:11:00.877 }, 00:11:00.877 { 00:11:00.877 "name": "BaseBdev3", 00:11:00.877 "uuid": "153c2c0a-ea9c-54df-8710-eef41901db28", 00:11:00.877 "is_configured": true, 00:11:00.877 "data_offset": 2048, 00:11:00.877 "data_size": 63488 00:11:00.877 } 00:11:00.877 ] 00:11:00.877 }' 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.877 14:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.447 14:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:01.447 14:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.447 14:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.447 [2024-11-20 14:21:40.146351] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:01.447 [2024-11-20 14:21:40.146385] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.447 [2024-11-20 14:21:40.149674] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.447 [2024-11-20 14:21:40.149867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.447 [2024-11-20 14:21:40.149939] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.447 [2024-11-20 14:21:40.149957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:01.447 { 00:11:01.447 "results": [ 00:11:01.447 { 00:11:01.447 "job": "raid_bdev1", 00:11:01.447 "core_mask": "0x1", 00:11:01.447 "workload": "randrw", 00:11:01.447 "percentage": 50, 
00:11:01.447 "status": "finished", 00:11:01.447 "queue_depth": 1, 00:11:01.447 "io_size": 131072, 00:11:01.447 "runtime": 1.391829, 00:11:01.447 "iops": 11179.534267499816, 00:11:01.447 "mibps": 1397.441783437477, 00:11:01.447 "io_failed": 1, 00:11:01.447 "io_timeout": 0, 00:11:01.447 "avg_latency_us": 124.30015084330873, 00:11:01.447 "min_latency_us": 37.70181818181818, 00:11:01.447 "max_latency_us": 1869.2654545454545 00:11:01.447 } 00:11:01.447 ], 00:11:01.447 "core_count": 1 00:11:01.447 } 00:11:01.447 14:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.447 14:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67168 00:11:01.447 14:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67168 ']' 00:11:01.447 14:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67168 00:11:01.447 14:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:01.447 14:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.447 14:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67168 00:11:01.447 killing process with pid 67168 00:11:01.447 14:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.447 14:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.447 14:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67168' 00:11:01.447 14:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67168 00:11:01.447 [2024-11-20 14:21:40.186314] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.447 14:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67168 00:11:01.447 [2024-11-20 
14:21:40.394486] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:02.827 14:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YweL78pJ2K 00:11:02.827 14:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:02.827 14:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:02.827 14:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:02.827 14:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:02.827 14:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:02.827 14:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:02.827 14:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:02.827 ************************************ 00:11:02.827 END TEST raid_read_error_test 00:11:02.827 ************************************ 00:11:02.827 00:11:02.827 real 0m4.689s 00:11:02.827 user 0m5.789s 00:11:02.827 sys 0m0.594s 00:11:02.827 14:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.827 14:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.827 14:21:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:02.827 14:21:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:02.827 14:21:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.827 14:21:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.827 ************************************ 00:11:02.827 START TEST raid_write_error_test 00:11:02.827 ************************************ 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:11:02.827 14:21:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:02.827 14:21:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tYgSjjIZpN 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67308 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67308 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67308 ']' 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.827 14:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.827 [2024-11-20 14:21:41.666351] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:11:02.827 [2024-11-20 14:21:41.666743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67308 ] 00:11:03.172 [2024-11-20 14:21:41.848725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.172 [2024-11-20 14:21:41.974075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.432 [2024-11-20 14:21:42.173178] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.432 [2024-11-20 14:21:42.173217] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.691 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.691 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:03.692 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.692 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:03.692 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.692 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.692 BaseBdev1_malloc 00:11:03.692 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.692 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:03.692 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.692 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.692 true 00:11:03.692 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.692 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:03.692 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.692 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.692 [2024-11-20 14:21:42.669097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:03.692 [2024-11-20 14:21:42.669166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.692 [2024-11-20 14:21:42.669194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:03.692 [2024-11-20 14:21:42.669212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.692 [2024-11-20 14:21:42.671959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.692 [2024-11-20 14:21:42.672020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:03.952 BaseBdev1 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:03.952 BaseBdev2_malloc 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.952 true 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.952 [2024-11-20 14:21:42.724911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:03.952 [2024-11-20 14:21:42.724979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.952 [2024-11-20 14:21:42.725027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:03.952 [2024-11-20 14:21:42.725057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.952 [2024-11-20 14:21:42.727785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.952 [2024-11-20 14:21:42.727849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:03.952 BaseBdev2 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.952 14:21:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.952 BaseBdev3_malloc 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.952 true 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.952 [2024-11-20 14:21:42.789879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:03.952 [2024-11-20 14:21:42.789945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.952 [2024-11-20 14:21:42.789971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:03.952 [2024-11-20 14:21:42.790000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.952 [2024-11-20 14:21:42.792839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.952 [2024-11-20 14:21:42.793021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:03.952 BaseBdev3 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.952 [2024-11-20 14:21:42.797981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.952 [2024-11-20 14:21:42.800392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.952 [2024-11-20 14:21:42.800505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:03.952 [2024-11-20 14:21:42.800776] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:03.952 [2024-11-20 14:21:42.800802] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:03.952 [2024-11-20 14:21:42.801128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:03.952 [2024-11-20 14:21:42.801350] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:03.952 [2024-11-20 14:21:42.801378] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:03.952 [2024-11-20 14:21:42.801554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.952 "name": "raid_bdev1", 00:11:03.952 "uuid": "e48133a8-5fe9-4621-a821-487ae791129a", 00:11:03.952 "strip_size_kb": 64, 00:11:03.952 "state": "online", 00:11:03.952 "raid_level": "concat", 00:11:03.952 "superblock": true, 00:11:03.952 "num_base_bdevs": 3, 00:11:03.952 "num_base_bdevs_discovered": 3, 00:11:03.952 "num_base_bdevs_operational": 3, 00:11:03.952 "base_bdevs_list": [ 00:11:03.952 { 00:11:03.952 
"name": "BaseBdev1", 00:11:03.952 "uuid": "16bfe9e1-c5be-5a69-9f27-50ef93f9ba87", 00:11:03.952 "is_configured": true, 00:11:03.952 "data_offset": 2048, 00:11:03.952 "data_size": 63488 00:11:03.952 }, 00:11:03.952 { 00:11:03.952 "name": "BaseBdev2", 00:11:03.952 "uuid": "870a19b6-48fb-5e32-92e5-68e2b1c5f511", 00:11:03.952 "is_configured": true, 00:11:03.952 "data_offset": 2048, 00:11:03.952 "data_size": 63488 00:11:03.952 }, 00:11:03.952 { 00:11:03.952 "name": "BaseBdev3", 00:11:03.952 "uuid": "a09c5b0d-1b7a-5529-a7de-dedd00b63e99", 00:11:03.952 "is_configured": true, 00:11:03.952 "data_offset": 2048, 00:11:03.952 "data_size": 63488 00:11:03.952 } 00:11:03.952 ] 00:11:03.952 }' 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.952 14:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.524 14:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:04.524 14:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:04.524 [2024-11-20 14:21:43.451526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.459 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.459 "name": "raid_bdev1", 00:11:05.459 "uuid": "e48133a8-5fe9-4621-a821-487ae791129a", 00:11:05.459 "strip_size_kb": 64, 00:11:05.459 "state": "online", 
00:11:05.459 "raid_level": "concat", 00:11:05.459 "superblock": true, 00:11:05.459 "num_base_bdevs": 3, 00:11:05.459 "num_base_bdevs_discovered": 3, 00:11:05.459 "num_base_bdevs_operational": 3, 00:11:05.459 "base_bdevs_list": [ 00:11:05.459 { 00:11:05.459 "name": "BaseBdev1", 00:11:05.459 "uuid": "16bfe9e1-c5be-5a69-9f27-50ef93f9ba87", 00:11:05.459 "is_configured": true, 00:11:05.459 "data_offset": 2048, 00:11:05.459 "data_size": 63488 00:11:05.459 }, 00:11:05.459 { 00:11:05.459 "name": "BaseBdev2", 00:11:05.460 "uuid": "870a19b6-48fb-5e32-92e5-68e2b1c5f511", 00:11:05.460 "is_configured": true, 00:11:05.460 "data_offset": 2048, 00:11:05.460 "data_size": 63488 00:11:05.460 }, 00:11:05.460 { 00:11:05.460 "name": "BaseBdev3", 00:11:05.460 "uuid": "a09c5b0d-1b7a-5529-a7de-dedd00b63e99", 00:11:05.460 "is_configured": true, 00:11:05.460 "data_offset": 2048, 00:11:05.460 "data_size": 63488 00:11:05.460 } 00:11:05.460 ] 00:11:05.460 }' 00:11:05.460 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.460 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.027 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:06.027 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.027 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.027 [2024-11-20 14:21:44.845817] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.027 [2024-11-20 14:21:44.845999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.027 [2024-11-20 14:21:44.849459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.027 [2024-11-20 14:21:44.849646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.027 [2024-11-20 14:21:44.849743] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.027 { 00:11:06.027 "results": [ 00:11:06.027 { 00:11:06.027 "job": "raid_bdev1", 00:11:06.027 "core_mask": "0x1", 00:11:06.027 "workload": "randrw", 00:11:06.027 "percentage": 50, 00:11:06.027 "status": "finished", 00:11:06.027 "queue_depth": 1, 00:11:06.027 "io_size": 131072, 00:11:06.027 "runtime": 1.392142, 00:11:06.027 "iops": 11140.386541028141, 00:11:06.027 "mibps": 1392.5483176285177, 00:11:06.027 "io_failed": 1, 00:11:06.027 "io_timeout": 0, 00:11:06.027 "avg_latency_us": 124.72808112068459, 00:11:06.027 "min_latency_us": 39.79636363636364, 00:11:06.027 "max_latency_us": 1832.0290909090909 00:11:06.027 } 00:11:06.027 ], 00:11:06.027 "core_count": 1 00:11:06.027 } 00:11:06.027 [2024-11-20 14:21:44.849983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:06.027 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.027 14:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67308 00:11:06.027 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67308 ']' 00:11:06.027 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67308 00:11:06.027 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:06.027 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.027 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67308 00:11:06.027 killing process with pid 67308 00:11:06.027 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.027 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.027 
14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67308' 00:11:06.027 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67308 00:11:06.027 [2024-11-20 14:21:44.888816] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.027 14:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67308 00:11:06.286 [2024-11-20 14:21:45.090225] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.221 14:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tYgSjjIZpN 00:11:07.221 14:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:07.221 14:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:07.480 14:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:07.480 14:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:07.480 14:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:07.480 14:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:07.480 14:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:07.480 00:11:07.480 real 0m4.653s 00:11:07.480 user 0m5.757s 00:11:07.480 sys 0m0.564s 00:11:07.480 ************************************ 00:11:07.480 END TEST raid_write_error_test 00:11:07.480 ************************************ 00:11:07.480 14:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.480 14:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.480 14:21:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:07.480 14:21:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:11:07.480 14:21:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:07.480 14:21:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.480 14:21:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.480 ************************************ 00:11:07.480 START TEST raid_state_function_test 00:11:07.480 ************************************ 00:11:07.480 14:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:11:07.480 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:07.481 Process raid pid: 67458 00:11:07.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67458 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67458' 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67458 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # 
'[' -z 67458 ']' 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.481 14:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.481 [2024-11-20 14:21:46.369391] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:11:07.481 [2024-11-20 14:21:46.370322] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.739 [2024-11-20 14:21:46.557137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.739 [2024-11-20 14:21:46.689612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.997 [2024-11-20 14:21:46.918101] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.997 [2024-11-20 14:21:46.918324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.565 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.565 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:08.565 14:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:08.565 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:08.565 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.565 [2024-11-20 14:21:47.418214] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:08.565 [2024-11-20 14:21:47.418428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:08.565 [2024-11-20 14:21:47.418566] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:08.565 [2024-11-20 14:21:47.418630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:08.565 [2024-11-20 14:21:47.418782] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:08.565 [2024-11-20 14:21:47.418817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:08.565 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.565 14:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:08.565 14:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.565 14:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.565 14:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.565 14:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.565 14:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.565 14:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.565 14:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.565 14:21:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.565 14:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.566 14:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.566 14:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.566 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.566 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.566 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.566 14:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.566 "name": "Existed_Raid", 00:11:08.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.566 "strip_size_kb": 0, 00:11:08.566 "state": "configuring", 00:11:08.566 "raid_level": "raid1", 00:11:08.566 "superblock": false, 00:11:08.566 "num_base_bdevs": 3, 00:11:08.566 "num_base_bdevs_discovered": 0, 00:11:08.566 "num_base_bdevs_operational": 3, 00:11:08.566 "base_bdevs_list": [ 00:11:08.566 { 00:11:08.566 "name": "BaseBdev1", 00:11:08.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.566 "is_configured": false, 00:11:08.566 "data_offset": 0, 00:11:08.566 "data_size": 0 00:11:08.566 }, 00:11:08.566 { 00:11:08.566 "name": "BaseBdev2", 00:11:08.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.566 "is_configured": false, 00:11:08.566 "data_offset": 0, 00:11:08.566 "data_size": 0 00:11:08.566 }, 00:11:08.566 { 00:11:08.566 "name": "BaseBdev3", 00:11:08.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.566 "is_configured": false, 00:11:08.566 "data_offset": 0, 00:11:08.566 "data_size": 0 00:11:08.566 } 00:11:08.566 ] 00:11:08.566 }' 00:11:08.566 14:21:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.566 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.135 [2024-11-20 14:21:47.934708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:09.135 [2024-11-20 14:21:47.934774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.135 [2024-11-20 14:21:47.946713] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.135 [2024-11-20 14:21:47.946925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.135 [2024-11-20 14:21:47.947079] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:09.135 [2024-11-20 14:21:47.947161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:09.135 [2024-11-20 14:21:47.947358] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:09.135 [2024-11-20 14:21:47.947392] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.135 [2024-11-20 14:21:47.995905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.135 BaseBdev1 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.135 14:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.135 [ 00:11:09.135 { 00:11:09.135 "name": "BaseBdev1", 00:11:09.135 "aliases": [ 00:11:09.135 "a72166c7-2114-42a7-b80e-d53b5aa37edc" 00:11:09.135 ], 00:11:09.135 "product_name": "Malloc disk", 00:11:09.135 "block_size": 512, 00:11:09.135 "num_blocks": 65536, 00:11:09.135 "uuid": "a72166c7-2114-42a7-b80e-d53b5aa37edc", 00:11:09.135 "assigned_rate_limits": { 00:11:09.135 "rw_ios_per_sec": 0, 00:11:09.135 "rw_mbytes_per_sec": 0, 00:11:09.135 "r_mbytes_per_sec": 0, 00:11:09.135 "w_mbytes_per_sec": 0 00:11:09.135 }, 00:11:09.135 "claimed": true, 00:11:09.135 "claim_type": "exclusive_write", 00:11:09.135 "zoned": false, 00:11:09.135 "supported_io_types": { 00:11:09.135 "read": true, 00:11:09.135 "write": true, 00:11:09.135 "unmap": true, 00:11:09.135 "flush": true, 00:11:09.135 "reset": true, 00:11:09.135 "nvme_admin": false, 00:11:09.135 "nvme_io": false, 00:11:09.135 "nvme_io_md": false, 00:11:09.135 "write_zeroes": true, 00:11:09.135 "zcopy": true, 00:11:09.135 "get_zone_info": false, 00:11:09.135 "zone_management": false, 00:11:09.135 "zone_append": false, 00:11:09.135 "compare": false, 00:11:09.135 "compare_and_write": false, 00:11:09.135 "abort": true, 00:11:09.135 "seek_hole": false, 00:11:09.135 "seek_data": false, 00:11:09.135 "copy": true, 00:11:09.135 "nvme_iov_md": false 00:11:09.135 }, 00:11:09.135 "memory_domains": [ 00:11:09.135 { 00:11:09.135 "dma_device_id": "system", 00:11:09.135 "dma_device_type": 1 00:11:09.135 }, 00:11:09.135 { 00:11:09.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.135 "dma_device_type": 2 00:11:09.135 } 00:11:09.135 ], 00:11:09.135 "driver_specific": {} 00:11:09.135 } 00:11:09.135 ] 00:11:09.135 14:21:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:09.135 "name": "Existed_Raid", 00:11:09.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.135 "strip_size_kb": 0, 00:11:09.135 "state": "configuring", 00:11:09.135 "raid_level": "raid1", 00:11:09.135 "superblock": false, 00:11:09.135 "num_base_bdevs": 3, 00:11:09.135 "num_base_bdevs_discovered": 1, 00:11:09.135 "num_base_bdevs_operational": 3, 00:11:09.135 "base_bdevs_list": [ 00:11:09.135 { 00:11:09.135 "name": "BaseBdev1", 00:11:09.135 "uuid": "a72166c7-2114-42a7-b80e-d53b5aa37edc", 00:11:09.135 "is_configured": true, 00:11:09.135 "data_offset": 0, 00:11:09.135 "data_size": 65536 00:11:09.135 }, 00:11:09.135 { 00:11:09.135 "name": "BaseBdev2", 00:11:09.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.135 "is_configured": false, 00:11:09.135 "data_offset": 0, 00:11:09.135 "data_size": 0 00:11:09.135 }, 00:11:09.135 { 00:11:09.135 "name": "BaseBdev3", 00:11:09.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.135 "is_configured": false, 00:11:09.135 "data_offset": 0, 00:11:09.135 "data_size": 0 00:11:09.135 } 00:11:09.135 ] 00:11:09.135 }' 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.135 14:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.704 [2024-11-20 14:21:48.564167] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:09.704 [2024-11-20 14:21:48.564232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.704 [2024-11-20 14:21:48.572190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.704 [2024-11-20 14:21:48.574908] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:09.704 [2024-11-20 14:21:48.575119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:09.704 [2024-11-20 14:21:48.575270] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:09.704 [2024-11-20 14:21:48.575440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.704 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.704 "name": "Existed_Raid", 00:11:09.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.704 "strip_size_kb": 0, 00:11:09.704 "state": "configuring", 00:11:09.704 "raid_level": "raid1", 00:11:09.704 "superblock": false, 00:11:09.704 "num_base_bdevs": 3, 00:11:09.704 "num_base_bdevs_discovered": 1, 00:11:09.704 "num_base_bdevs_operational": 3, 00:11:09.704 "base_bdevs_list": [ 00:11:09.704 { 00:11:09.704 "name": "BaseBdev1", 00:11:09.704 "uuid": "a72166c7-2114-42a7-b80e-d53b5aa37edc", 00:11:09.704 "is_configured": true, 00:11:09.704 "data_offset": 0, 00:11:09.704 "data_size": 65536 00:11:09.704 }, 00:11:09.704 { 00:11:09.704 "name": "BaseBdev2", 00:11:09.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.704 
"is_configured": false, 00:11:09.704 "data_offset": 0, 00:11:09.704 "data_size": 0 00:11:09.704 }, 00:11:09.704 { 00:11:09.704 "name": "BaseBdev3", 00:11:09.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.704 "is_configured": false, 00:11:09.704 "data_offset": 0, 00:11:09.704 "data_size": 0 00:11:09.704 } 00:11:09.705 ] 00:11:09.705 }' 00:11:09.705 14:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.705 14:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.272 [2024-11-20 14:21:49.144616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.272 BaseBdev2 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.272 14:21:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.272 [ 00:11:10.272 { 00:11:10.272 "name": "BaseBdev2", 00:11:10.272 "aliases": [ 00:11:10.272 "550fd059-aa8c-41e5-b7b0-5fa7d7c89492" 00:11:10.272 ], 00:11:10.272 "product_name": "Malloc disk", 00:11:10.272 "block_size": 512, 00:11:10.272 "num_blocks": 65536, 00:11:10.272 "uuid": "550fd059-aa8c-41e5-b7b0-5fa7d7c89492", 00:11:10.272 "assigned_rate_limits": { 00:11:10.272 "rw_ios_per_sec": 0, 00:11:10.272 "rw_mbytes_per_sec": 0, 00:11:10.272 "r_mbytes_per_sec": 0, 00:11:10.272 "w_mbytes_per_sec": 0 00:11:10.272 }, 00:11:10.272 "claimed": true, 00:11:10.272 "claim_type": "exclusive_write", 00:11:10.272 "zoned": false, 00:11:10.272 "supported_io_types": { 00:11:10.272 "read": true, 00:11:10.272 "write": true, 00:11:10.272 "unmap": true, 00:11:10.272 "flush": true, 00:11:10.272 "reset": true, 00:11:10.272 "nvme_admin": false, 00:11:10.272 "nvme_io": false, 00:11:10.272 "nvme_io_md": false, 00:11:10.272 "write_zeroes": true, 00:11:10.272 "zcopy": true, 00:11:10.272 "get_zone_info": false, 00:11:10.272 "zone_management": false, 00:11:10.272 "zone_append": false, 00:11:10.272 "compare": false, 00:11:10.272 "compare_and_write": false, 00:11:10.272 "abort": true, 00:11:10.272 "seek_hole": false, 00:11:10.272 "seek_data": false, 00:11:10.272 "copy": true, 00:11:10.272 "nvme_iov_md": false 00:11:10.272 }, 00:11:10.272 
"memory_domains": [ 00:11:10.272 { 00:11:10.272 "dma_device_id": "system", 00:11:10.272 "dma_device_type": 1 00:11:10.272 }, 00:11:10.272 { 00:11:10.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.272 "dma_device_type": 2 00:11:10.272 } 00:11:10.272 ], 00:11:10.272 "driver_specific": {} 00:11:10.272 } 00:11:10.272 ] 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.272 "name": "Existed_Raid", 00:11:10.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.272 "strip_size_kb": 0, 00:11:10.272 "state": "configuring", 00:11:10.272 "raid_level": "raid1", 00:11:10.272 "superblock": false, 00:11:10.272 "num_base_bdevs": 3, 00:11:10.272 "num_base_bdevs_discovered": 2, 00:11:10.272 "num_base_bdevs_operational": 3, 00:11:10.272 "base_bdevs_list": [ 00:11:10.272 { 00:11:10.272 "name": "BaseBdev1", 00:11:10.272 "uuid": "a72166c7-2114-42a7-b80e-d53b5aa37edc", 00:11:10.272 "is_configured": true, 00:11:10.272 "data_offset": 0, 00:11:10.272 "data_size": 65536 00:11:10.272 }, 00:11:10.272 { 00:11:10.272 "name": "BaseBdev2", 00:11:10.272 "uuid": "550fd059-aa8c-41e5-b7b0-5fa7d7c89492", 00:11:10.272 "is_configured": true, 00:11:10.272 "data_offset": 0, 00:11:10.272 "data_size": 65536 00:11:10.272 }, 00:11:10.272 { 00:11:10.272 "name": "BaseBdev3", 00:11:10.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.272 "is_configured": false, 00:11:10.272 "data_offset": 0, 00:11:10.272 "data_size": 0 00:11:10.272 } 00:11:10.272 ] 00:11:10.272 }' 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.272 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.839 [2024-11-20 14:21:49.732180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.839 [2024-11-20 14:21:49.732388] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:10.839 [2024-11-20 14:21:49.732424] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:10.839 [2024-11-20 14:21:49.732784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:10.839 [2024-11-20 14:21:49.733047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:10.839 [2024-11-20 14:21:49.733066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:10.839 [2024-11-20 14:21:49.733379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.839 BaseBdev3 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.839 [ 00:11:10.839 { 00:11:10.839 "name": "BaseBdev3", 00:11:10.839 "aliases": [ 00:11:10.839 "36d319f4-e6d5-4835-bc48-0c79ff0b0417" 00:11:10.839 ], 00:11:10.839 "product_name": "Malloc disk", 00:11:10.839 "block_size": 512, 00:11:10.839 "num_blocks": 65536, 00:11:10.839 "uuid": "36d319f4-e6d5-4835-bc48-0c79ff0b0417", 00:11:10.839 "assigned_rate_limits": { 00:11:10.839 "rw_ios_per_sec": 0, 00:11:10.839 "rw_mbytes_per_sec": 0, 00:11:10.839 "r_mbytes_per_sec": 0, 00:11:10.839 "w_mbytes_per_sec": 0 00:11:10.839 }, 00:11:10.839 "claimed": true, 00:11:10.839 "claim_type": "exclusive_write", 00:11:10.839 "zoned": false, 00:11:10.839 "supported_io_types": { 00:11:10.839 "read": true, 00:11:10.839 "write": true, 00:11:10.839 "unmap": true, 00:11:10.839 "flush": true, 00:11:10.839 "reset": true, 00:11:10.839 "nvme_admin": false, 00:11:10.839 "nvme_io": false, 00:11:10.839 "nvme_io_md": false, 00:11:10.839 "write_zeroes": true, 00:11:10.839 "zcopy": true, 00:11:10.839 "get_zone_info": false, 00:11:10.839 "zone_management": false, 00:11:10.839 "zone_append": false, 00:11:10.839 "compare": false, 00:11:10.839 "compare_and_write": false, 00:11:10.839 "abort": true, 00:11:10.839 "seek_hole": false, 00:11:10.839 "seek_data": false, 00:11:10.839 
"copy": true, 00:11:10.839 "nvme_iov_md": false 00:11:10.839 }, 00:11:10.839 "memory_domains": [ 00:11:10.839 { 00:11:10.839 "dma_device_id": "system", 00:11:10.839 "dma_device_type": 1 00:11:10.839 }, 00:11:10.839 { 00:11:10.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.839 "dma_device_type": 2 00:11:10.839 } 00:11:10.839 ], 00:11:10.839 "driver_specific": {} 00:11:10.839 } 00:11:10.839 ] 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.839 14:21:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.839 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.839 "name": "Existed_Raid", 00:11:10.839 "uuid": "377efcee-1ea1-40e1-90a8-145969c5e208", 00:11:10.839 "strip_size_kb": 0, 00:11:10.839 "state": "online", 00:11:10.839 "raid_level": "raid1", 00:11:10.839 "superblock": false, 00:11:10.839 "num_base_bdevs": 3, 00:11:10.839 "num_base_bdevs_discovered": 3, 00:11:10.839 "num_base_bdevs_operational": 3, 00:11:10.839 "base_bdevs_list": [ 00:11:10.839 { 00:11:10.839 "name": "BaseBdev1", 00:11:10.839 "uuid": "a72166c7-2114-42a7-b80e-d53b5aa37edc", 00:11:10.839 "is_configured": true, 00:11:10.840 "data_offset": 0, 00:11:10.840 "data_size": 65536 00:11:10.840 }, 00:11:10.840 { 00:11:10.840 "name": "BaseBdev2", 00:11:10.840 "uuid": "550fd059-aa8c-41e5-b7b0-5fa7d7c89492", 00:11:10.840 "is_configured": true, 00:11:10.840 "data_offset": 0, 00:11:10.840 "data_size": 65536 00:11:10.840 }, 00:11:10.840 { 00:11:10.840 "name": "BaseBdev3", 00:11:10.840 "uuid": "36d319f4-e6d5-4835-bc48-0c79ff0b0417", 00:11:10.840 "is_configured": true, 00:11:10.840 "data_offset": 0, 00:11:10.840 "data_size": 65536 00:11:10.840 } 00:11:10.840 ] 00:11:10.840 }' 00:11:10.840 14:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.840 14:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.407 14:21:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:11.407 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:11.407 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:11.407 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:11.407 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:11.407 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:11.407 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:11.407 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:11.407 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.407 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.407 [2024-11-20 14:21:50.268790] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.407 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.407 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:11.407 "name": "Existed_Raid", 00:11:11.407 "aliases": [ 00:11:11.407 "377efcee-1ea1-40e1-90a8-145969c5e208" 00:11:11.407 ], 00:11:11.407 "product_name": "Raid Volume", 00:11:11.407 "block_size": 512, 00:11:11.407 "num_blocks": 65536, 00:11:11.407 "uuid": "377efcee-1ea1-40e1-90a8-145969c5e208", 00:11:11.407 "assigned_rate_limits": { 00:11:11.407 "rw_ios_per_sec": 0, 00:11:11.407 "rw_mbytes_per_sec": 0, 00:11:11.407 "r_mbytes_per_sec": 0, 00:11:11.407 "w_mbytes_per_sec": 0 00:11:11.407 }, 00:11:11.407 "claimed": false, 00:11:11.407 "zoned": false, 
00:11:11.407 "supported_io_types": { 00:11:11.407 "read": true, 00:11:11.407 "write": true, 00:11:11.407 "unmap": false, 00:11:11.408 "flush": false, 00:11:11.408 "reset": true, 00:11:11.408 "nvme_admin": false, 00:11:11.408 "nvme_io": false, 00:11:11.408 "nvme_io_md": false, 00:11:11.408 "write_zeroes": true, 00:11:11.408 "zcopy": false, 00:11:11.408 "get_zone_info": false, 00:11:11.408 "zone_management": false, 00:11:11.408 "zone_append": false, 00:11:11.408 "compare": false, 00:11:11.408 "compare_and_write": false, 00:11:11.408 "abort": false, 00:11:11.408 "seek_hole": false, 00:11:11.408 "seek_data": false, 00:11:11.408 "copy": false, 00:11:11.408 "nvme_iov_md": false 00:11:11.408 }, 00:11:11.408 "memory_domains": [ 00:11:11.408 { 00:11:11.408 "dma_device_id": "system", 00:11:11.408 "dma_device_type": 1 00:11:11.408 }, 00:11:11.408 { 00:11:11.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.408 "dma_device_type": 2 00:11:11.408 }, 00:11:11.408 { 00:11:11.408 "dma_device_id": "system", 00:11:11.408 "dma_device_type": 1 00:11:11.408 }, 00:11:11.408 { 00:11:11.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.408 "dma_device_type": 2 00:11:11.408 }, 00:11:11.408 { 00:11:11.408 "dma_device_id": "system", 00:11:11.408 "dma_device_type": 1 00:11:11.408 }, 00:11:11.408 { 00:11:11.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.408 "dma_device_type": 2 00:11:11.408 } 00:11:11.408 ], 00:11:11.408 "driver_specific": { 00:11:11.408 "raid": { 00:11:11.408 "uuid": "377efcee-1ea1-40e1-90a8-145969c5e208", 00:11:11.408 "strip_size_kb": 0, 00:11:11.408 "state": "online", 00:11:11.408 "raid_level": "raid1", 00:11:11.408 "superblock": false, 00:11:11.408 "num_base_bdevs": 3, 00:11:11.408 "num_base_bdevs_discovered": 3, 00:11:11.408 "num_base_bdevs_operational": 3, 00:11:11.408 "base_bdevs_list": [ 00:11:11.408 { 00:11:11.408 "name": "BaseBdev1", 00:11:11.408 "uuid": "a72166c7-2114-42a7-b80e-d53b5aa37edc", 00:11:11.408 "is_configured": true, 00:11:11.408 
"data_offset": 0, 00:11:11.408 "data_size": 65536 00:11:11.408 }, 00:11:11.408 { 00:11:11.408 "name": "BaseBdev2", 00:11:11.408 "uuid": "550fd059-aa8c-41e5-b7b0-5fa7d7c89492", 00:11:11.408 "is_configured": true, 00:11:11.408 "data_offset": 0, 00:11:11.408 "data_size": 65536 00:11:11.408 }, 00:11:11.408 { 00:11:11.408 "name": "BaseBdev3", 00:11:11.408 "uuid": "36d319f4-e6d5-4835-bc48-0c79ff0b0417", 00:11:11.408 "is_configured": true, 00:11:11.408 "data_offset": 0, 00:11:11.408 "data_size": 65536 00:11:11.408 } 00:11:11.408 ] 00:11:11.408 } 00:11:11.408 } 00:11:11.408 }' 00:11:11.408 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:11.408 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:11.408 BaseBdev2 00:11:11.408 BaseBdev3' 00:11:11.408 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.667 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:11.667 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.667 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.667 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:11.667 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.667 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.667 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.667 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:11:11.667 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.667 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.667 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.668 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:11.668 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.668 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.668 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.668 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.668 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.668 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.668 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:11.668 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.668 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.668 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.668 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.668 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.668 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:11:11.668 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:11.668 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.668 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.668 [2024-11-20 14:21:50.600555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.927 "name": "Existed_Raid", 00:11:11.927 "uuid": "377efcee-1ea1-40e1-90a8-145969c5e208", 00:11:11.927 "strip_size_kb": 0, 00:11:11.927 "state": "online", 00:11:11.927 "raid_level": "raid1", 00:11:11.927 "superblock": false, 00:11:11.927 "num_base_bdevs": 3, 00:11:11.927 "num_base_bdevs_discovered": 2, 00:11:11.927 "num_base_bdevs_operational": 2, 00:11:11.927 "base_bdevs_list": [ 00:11:11.927 { 00:11:11.927 "name": null, 00:11:11.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.927 "is_configured": false, 00:11:11.927 "data_offset": 0, 00:11:11.927 "data_size": 65536 00:11:11.927 }, 00:11:11.927 { 00:11:11.927 "name": "BaseBdev2", 00:11:11.927 "uuid": "550fd059-aa8c-41e5-b7b0-5fa7d7c89492", 00:11:11.927 "is_configured": true, 00:11:11.927 "data_offset": 0, 00:11:11.927 "data_size": 65536 00:11:11.927 }, 00:11:11.927 { 00:11:11.927 "name": "BaseBdev3", 00:11:11.927 "uuid": "36d319f4-e6d5-4835-bc48-0c79ff0b0417", 00:11:11.927 "is_configured": true, 00:11:11.927 "data_offset": 0, 00:11:11.927 "data_size": 65536 00:11:11.927 } 00:11:11.927 ] 
00:11:11.927 }' 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.927 14:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.534 [2024-11-20 14:21:51.238031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:12.534 14:21:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.534 [2024-11-20 14:21:51.378736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:12.534 [2024-11-20 14:21:51.378862] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.534 [2024-11-20 14:21:51.462103] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.534 [2024-11-20 14:21:51.462363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.534 [2024-11-20 14:21:51.462563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:12.534 14:21:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:12.534 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.793 BaseBdev2 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.793 
14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.793 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.793 [ 00:11:12.793 { 00:11:12.793 "name": "BaseBdev2", 00:11:12.793 "aliases": [ 00:11:12.793 "fac3ba0f-62ca-41f1-938b-2c43fee2f47b" 00:11:12.793 ], 00:11:12.793 "product_name": "Malloc disk", 00:11:12.793 "block_size": 512, 00:11:12.793 "num_blocks": 65536, 00:11:12.793 "uuid": "fac3ba0f-62ca-41f1-938b-2c43fee2f47b", 00:11:12.793 "assigned_rate_limits": { 00:11:12.793 "rw_ios_per_sec": 0, 00:11:12.793 "rw_mbytes_per_sec": 0, 00:11:12.793 "r_mbytes_per_sec": 0, 00:11:12.793 "w_mbytes_per_sec": 0 00:11:12.793 }, 00:11:12.793 "claimed": false, 00:11:12.793 "zoned": false, 00:11:12.793 "supported_io_types": { 00:11:12.793 "read": true, 00:11:12.793 "write": true, 00:11:12.793 "unmap": true, 00:11:12.793 "flush": true, 00:11:12.793 "reset": true, 00:11:12.793 "nvme_admin": false, 00:11:12.793 "nvme_io": false, 00:11:12.793 "nvme_io_md": false, 00:11:12.794 "write_zeroes": true, 
00:11:12.794 "zcopy": true, 00:11:12.794 "get_zone_info": false, 00:11:12.794 "zone_management": false, 00:11:12.794 "zone_append": false, 00:11:12.794 "compare": false, 00:11:12.794 "compare_and_write": false, 00:11:12.794 "abort": true, 00:11:12.794 "seek_hole": false, 00:11:12.794 "seek_data": false, 00:11:12.794 "copy": true, 00:11:12.794 "nvme_iov_md": false 00:11:12.794 }, 00:11:12.794 "memory_domains": [ 00:11:12.794 { 00:11:12.794 "dma_device_id": "system", 00:11:12.794 "dma_device_type": 1 00:11:12.794 }, 00:11:12.794 { 00:11:12.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.794 "dma_device_type": 2 00:11:12.794 } 00:11:12.794 ], 00:11:12.794 "driver_specific": {} 00:11:12.794 } 00:11:12.794 ] 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.794 BaseBdev3 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.794 14:21:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.794 [ 00:11:12.794 { 00:11:12.794 "name": "BaseBdev3", 00:11:12.794 "aliases": [ 00:11:12.794 "12d1ccaf-bfa9-4adb-b070-a3c37903abe4" 00:11:12.794 ], 00:11:12.794 "product_name": "Malloc disk", 00:11:12.794 "block_size": 512, 00:11:12.794 "num_blocks": 65536, 00:11:12.794 "uuid": "12d1ccaf-bfa9-4adb-b070-a3c37903abe4", 00:11:12.794 "assigned_rate_limits": { 00:11:12.794 "rw_ios_per_sec": 0, 00:11:12.794 "rw_mbytes_per_sec": 0, 00:11:12.794 "r_mbytes_per_sec": 0, 00:11:12.794 "w_mbytes_per_sec": 0 00:11:12.794 }, 00:11:12.794 "claimed": false, 00:11:12.794 "zoned": false, 00:11:12.794 "supported_io_types": { 00:11:12.794 "read": true, 00:11:12.794 "write": true, 00:11:12.794 "unmap": true, 00:11:12.794 "flush": true, 00:11:12.794 "reset": true, 00:11:12.794 "nvme_admin": false, 00:11:12.794 "nvme_io": false, 00:11:12.794 "nvme_io_md": false, 00:11:12.794 "write_zeroes": true, 
00:11:12.794 "zcopy": true, 00:11:12.794 "get_zone_info": false, 00:11:12.794 "zone_management": false, 00:11:12.794 "zone_append": false, 00:11:12.794 "compare": false, 00:11:12.794 "compare_and_write": false, 00:11:12.794 "abort": true, 00:11:12.794 "seek_hole": false, 00:11:12.794 "seek_data": false, 00:11:12.794 "copy": true, 00:11:12.794 "nvme_iov_md": false 00:11:12.794 }, 00:11:12.794 "memory_domains": [ 00:11:12.794 { 00:11:12.794 "dma_device_id": "system", 00:11:12.794 "dma_device_type": 1 00:11:12.794 }, 00:11:12.794 { 00:11:12.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.794 "dma_device_type": 2 00:11:12.794 } 00:11:12.794 ], 00:11:12.794 "driver_specific": {} 00:11:12.794 } 00:11:12.794 ] 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.794 [2024-11-20 14:21:51.675799] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:12.794 [2024-11-20 14:21:51.676065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:12.794 [2024-11-20 14:21:51.676197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.794 [2024-11-20 14:21:51.678657] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:12.794 "name": "Existed_Raid", 00:11:12.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.794 "strip_size_kb": 0, 00:11:12.794 "state": "configuring", 00:11:12.794 "raid_level": "raid1", 00:11:12.794 "superblock": false, 00:11:12.794 "num_base_bdevs": 3, 00:11:12.794 "num_base_bdevs_discovered": 2, 00:11:12.794 "num_base_bdevs_operational": 3, 00:11:12.794 "base_bdevs_list": [ 00:11:12.794 { 00:11:12.794 "name": "BaseBdev1", 00:11:12.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.794 "is_configured": false, 00:11:12.794 "data_offset": 0, 00:11:12.794 "data_size": 0 00:11:12.794 }, 00:11:12.794 { 00:11:12.794 "name": "BaseBdev2", 00:11:12.794 "uuid": "fac3ba0f-62ca-41f1-938b-2c43fee2f47b", 00:11:12.794 "is_configured": true, 00:11:12.794 "data_offset": 0, 00:11:12.794 "data_size": 65536 00:11:12.794 }, 00:11:12.794 { 00:11:12.794 "name": "BaseBdev3", 00:11:12.794 "uuid": "12d1ccaf-bfa9-4adb-b070-a3c37903abe4", 00:11:12.794 "is_configured": true, 00:11:12.794 "data_offset": 0, 00:11:12.794 "data_size": 65536 00:11:12.794 } 00:11:12.794 ] 00:11:12.794 }' 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.794 14:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.361 [2024-11-20 14:21:52.184047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.361 "name": "Existed_Raid", 00:11:13.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.361 "strip_size_kb": 0, 00:11:13.361 "state": "configuring", 00:11:13.361 "raid_level": "raid1", 00:11:13.361 "superblock": false, 00:11:13.361 "num_base_bdevs": 3, 
00:11:13.361 "num_base_bdevs_discovered": 1, 00:11:13.361 "num_base_bdevs_operational": 3, 00:11:13.361 "base_bdevs_list": [ 00:11:13.361 { 00:11:13.361 "name": "BaseBdev1", 00:11:13.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.361 "is_configured": false, 00:11:13.361 "data_offset": 0, 00:11:13.361 "data_size": 0 00:11:13.361 }, 00:11:13.361 { 00:11:13.361 "name": null, 00:11:13.361 "uuid": "fac3ba0f-62ca-41f1-938b-2c43fee2f47b", 00:11:13.361 "is_configured": false, 00:11:13.361 "data_offset": 0, 00:11:13.361 "data_size": 65536 00:11:13.361 }, 00:11:13.361 { 00:11:13.361 "name": "BaseBdev3", 00:11:13.361 "uuid": "12d1ccaf-bfa9-4adb-b070-a3c37903abe4", 00:11:13.361 "is_configured": true, 00:11:13.361 "data_offset": 0, 00:11:13.361 "data_size": 65536 00:11:13.361 } 00:11:13.361 ] 00:11:13.361 }' 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.361 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.928 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.928 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.928 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:13.928 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.928 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.928 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:13.928 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:13.928 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.928 14:21:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.928 [2024-11-20 14:21:52.818707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.928 BaseBdev1 00:11:13.928 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.928 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:13.928 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:13.928 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.928 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:13.928 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.928 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.928 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.929 [ 00:11:13.929 { 00:11:13.929 "name": "BaseBdev1", 00:11:13.929 "aliases": [ 00:11:13.929 "38cbe426-0957-44fc-ac27-3edb0ab73442" 00:11:13.929 ], 00:11:13.929 "product_name": "Malloc disk", 
00:11:13.929 "block_size": 512, 00:11:13.929 "num_blocks": 65536, 00:11:13.929 "uuid": "38cbe426-0957-44fc-ac27-3edb0ab73442", 00:11:13.929 "assigned_rate_limits": { 00:11:13.929 "rw_ios_per_sec": 0, 00:11:13.929 "rw_mbytes_per_sec": 0, 00:11:13.929 "r_mbytes_per_sec": 0, 00:11:13.929 "w_mbytes_per_sec": 0 00:11:13.929 }, 00:11:13.929 "claimed": true, 00:11:13.929 "claim_type": "exclusive_write", 00:11:13.929 "zoned": false, 00:11:13.929 "supported_io_types": { 00:11:13.929 "read": true, 00:11:13.929 "write": true, 00:11:13.929 "unmap": true, 00:11:13.929 "flush": true, 00:11:13.929 "reset": true, 00:11:13.929 "nvme_admin": false, 00:11:13.929 "nvme_io": false, 00:11:13.929 "nvme_io_md": false, 00:11:13.929 "write_zeroes": true, 00:11:13.929 "zcopy": true, 00:11:13.929 "get_zone_info": false, 00:11:13.929 "zone_management": false, 00:11:13.929 "zone_append": false, 00:11:13.929 "compare": false, 00:11:13.929 "compare_and_write": false, 00:11:13.929 "abort": true, 00:11:13.929 "seek_hole": false, 00:11:13.929 "seek_data": false, 00:11:13.929 "copy": true, 00:11:13.929 "nvme_iov_md": false 00:11:13.929 }, 00:11:13.929 "memory_domains": [ 00:11:13.929 { 00:11:13.929 "dma_device_id": "system", 00:11:13.929 "dma_device_type": 1 00:11:13.929 }, 00:11:13.929 { 00:11:13.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.929 "dma_device_type": 2 00:11:13.929 } 00:11:13.929 ], 00:11:13.929 "driver_specific": {} 00:11:13.929 } 00:11:13.929 ] 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.929 "name": "Existed_Raid", 00:11:13.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.929 "strip_size_kb": 0, 00:11:13.929 "state": "configuring", 00:11:13.929 "raid_level": "raid1", 00:11:13.929 "superblock": false, 00:11:13.929 "num_base_bdevs": 3, 00:11:13.929 "num_base_bdevs_discovered": 2, 00:11:13.929 "num_base_bdevs_operational": 3, 00:11:13.929 "base_bdevs_list": [ 00:11:13.929 { 00:11:13.929 "name": "BaseBdev1", 00:11:13.929 "uuid": 
"38cbe426-0957-44fc-ac27-3edb0ab73442", 00:11:13.929 "is_configured": true, 00:11:13.929 "data_offset": 0, 00:11:13.929 "data_size": 65536 00:11:13.929 }, 00:11:13.929 { 00:11:13.929 "name": null, 00:11:13.929 "uuid": "fac3ba0f-62ca-41f1-938b-2c43fee2f47b", 00:11:13.929 "is_configured": false, 00:11:13.929 "data_offset": 0, 00:11:13.929 "data_size": 65536 00:11:13.929 }, 00:11:13.929 { 00:11:13.929 "name": "BaseBdev3", 00:11:13.929 "uuid": "12d1ccaf-bfa9-4adb-b070-a3c37903abe4", 00:11:13.929 "is_configured": true, 00:11:13.929 "data_offset": 0, 00:11:13.929 "data_size": 65536 00:11:13.929 } 00:11:13.929 ] 00:11:13.929 }' 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.929 14:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.497 [2024-11-20 14:21:53.422909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:14.497 14:21:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.497 14:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.756 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.756 "name": "Existed_Raid", 00:11:14.756 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:14.756 "strip_size_kb": 0, 00:11:14.756 "state": "configuring", 00:11:14.756 "raid_level": "raid1", 00:11:14.756 "superblock": false, 00:11:14.756 "num_base_bdevs": 3, 00:11:14.756 "num_base_bdevs_discovered": 1, 00:11:14.756 "num_base_bdevs_operational": 3, 00:11:14.756 "base_bdevs_list": [ 00:11:14.756 { 00:11:14.756 "name": "BaseBdev1", 00:11:14.756 "uuid": "38cbe426-0957-44fc-ac27-3edb0ab73442", 00:11:14.756 "is_configured": true, 00:11:14.756 "data_offset": 0, 00:11:14.756 "data_size": 65536 00:11:14.756 }, 00:11:14.756 { 00:11:14.756 "name": null, 00:11:14.756 "uuid": "fac3ba0f-62ca-41f1-938b-2c43fee2f47b", 00:11:14.756 "is_configured": false, 00:11:14.756 "data_offset": 0, 00:11:14.756 "data_size": 65536 00:11:14.756 }, 00:11:14.756 { 00:11:14.756 "name": null, 00:11:14.756 "uuid": "12d1ccaf-bfa9-4adb-b070-a3c37903abe4", 00:11:14.756 "is_configured": false, 00:11:14.756 "data_offset": 0, 00:11:14.756 "data_size": 65536 00:11:14.756 } 00:11:14.756 ] 00:11:14.756 }' 00:11:14.756 14:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.757 14:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.325 [2024-11-20 14:21:54.055155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.325 "name": "Existed_Raid", 00:11:15.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.325 "strip_size_kb": 0, 00:11:15.325 "state": "configuring", 00:11:15.325 "raid_level": "raid1", 00:11:15.325 "superblock": false, 00:11:15.325 "num_base_bdevs": 3, 00:11:15.325 "num_base_bdevs_discovered": 2, 00:11:15.325 "num_base_bdevs_operational": 3, 00:11:15.325 "base_bdevs_list": [ 00:11:15.325 { 00:11:15.325 "name": "BaseBdev1", 00:11:15.325 "uuid": "38cbe426-0957-44fc-ac27-3edb0ab73442", 00:11:15.325 "is_configured": true, 00:11:15.325 "data_offset": 0, 00:11:15.325 "data_size": 65536 00:11:15.325 }, 00:11:15.325 { 00:11:15.325 "name": null, 00:11:15.325 "uuid": "fac3ba0f-62ca-41f1-938b-2c43fee2f47b", 00:11:15.325 "is_configured": false, 00:11:15.325 "data_offset": 0, 00:11:15.325 "data_size": 65536 00:11:15.325 }, 00:11:15.325 { 00:11:15.325 "name": "BaseBdev3", 00:11:15.325 "uuid": "12d1ccaf-bfa9-4adb-b070-a3c37903abe4", 00:11:15.325 "is_configured": true, 00:11:15.325 "data_offset": 0, 00:11:15.325 "data_size": 65536 00:11:15.325 } 00:11:15.325 ] 00:11:15.325 }' 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.325 14:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.893 [2024-11-20 14:21:54.647337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.893 14:21:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.893 "name": "Existed_Raid", 00:11:15.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.893 "strip_size_kb": 0, 00:11:15.893 "state": "configuring", 00:11:15.893 "raid_level": "raid1", 00:11:15.893 "superblock": false, 00:11:15.893 "num_base_bdevs": 3, 00:11:15.893 "num_base_bdevs_discovered": 1, 00:11:15.893 "num_base_bdevs_operational": 3, 00:11:15.893 "base_bdevs_list": [ 00:11:15.893 { 00:11:15.893 "name": null, 00:11:15.893 "uuid": "38cbe426-0957-44fc-ac27-3edb0ab73442", 00:11:15.893 "is_configured": false, 00:11:15.893 "data_offset": 0, 00:11:15.893 "data_size": 65536 00:11:15.893 }, 00:11:15.893 { 00:11:15.893 "name": null, 00:11:15.893 "uuid": "fac3ba0f-62ca-41f1-938b-2c43fee2f47b", 00:11:15.893 "is_configured": false, 00:11:15.893 "data_offset": 0, 00:11:15.893 "data_size": 65536 00:11:15.893 }, 00:11:15.893 { 00:11:15.893 "name": "BaseBdev3", 00:11:15.893 "uuid": "12d1ccaf-bfa9-4adb-b070-a3c37903abe4", 00:11:15.893 "is_configured": true, 00:11:15.893 "data_offset": 0, 00:11:15.893 "data_size": 65536 00:11:15.893 } 00:11:15.893 ] 00:11:15.893 }' 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.893 14:21:54 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.461 [2024-11-20 14:21:55.302639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.461 "name": "Existed_Raid", 00:11:16.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.461 "strip_size_kb": 0, 00:11:16.461 "state": "configuring", 00:11:16.461 "raid_level": "raid1", 00:11:16.461 "superblock": false, 00:11:16.461 "num_base_bdevs": 3, 00:11:16.461 "num_base_bdevs_discovered": 2, 00:11:16.461 "num_base_bdevs_operational": 3, 00:11:16.461 "base_bdevs_list": [ 00:11:16.461 { 00:11:16.461 "name": null, 00:11:16.461 "uuid": "38cbe426-0957-44fc-ac27-3edb0ab73442", 00:11:16.461 "is_configured": false, 00:11:16.461 "data_offset": 0, 00:11:16.461 "data_size": 65536 00:11:16.461 }, 00:11:16.461 { 00:11:16.461 "name": "BaseBdev2", 00:11:16.461 "uuid": "fac3ba0f-62ca-41f1-938b-2c43fee2f47b", 00:11:16.461 "is_configured": true, 00:11:16.461 "data_offset": 0, 00:11:16.461 "data_size": 65536 00:11:16.461 }, 00:11:16.461 { 
00:11:16.461 "name": "BaseBdev3", 00:11:16.461 "uuid": "12d1ccaf-bfa9-4adb-b070-a3c37903abe4", 00:11:16.461 "is_configured": true, 00:11:16.461 "data_offset": 0, 00:11:16.461 "data_size": 65536 00:11:16.461 } 00:11:16.461 ] 00:11:16.461 }' 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.461 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 38cbe426-0957-44fc-ac27-3edb0ab73442 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.030 14:21:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.030 [2024-11-20 14:21:55.981139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:17.030 [2024-11-20 14:21:55.981217] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:17.030 [2024-11-20 14:21:55.981230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:17.030 [2024-11-20 14:21:55.981564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:17.030 [2024-11-20 14:21:55.981748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:17.030 [2024-11-20 14:21:55.981770] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:17.030 [2024-11-20 14:21:55.982137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.030 NewBaseBdev 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.030 14:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.030 [ 00:11:17.030 { 00:11:17.030 "name": "NewBaseBdev", 00:11:17.030 "aliases": [ 00:11:17.030 "38cbe426-0957-44fc-ac27-3edb0ab73442" 00:11:17.030 ], 00:11:17.030 "product_name": "Malloc disk", 00:11:17.030 "block_size": 512, 00:11:17.030 "num_blocks": 65536, 00:11:17.030 "uuid": "38cbe426-0957-44fc-ac27-3edb0ab73442", 00:11:17.030 "assigned_rate_limits": { 00:11:17.030 "rw_ios_per_sec": 0, 00:11:17.030 "rw_mbytes_per_sec": 0, 00:11:17.030 "r_mbytes_per_sec": 0, 00:11:17.030 "w_mbytes_per_sec": 0 00:11:17.030 }, 00:11:17.030 "claimed": true, 00:11:17.030 "claim_type": "exclusive_write", 00:11:17.030 "zoned": false, 00:11:17.030 "supported_io_types": { 00:11:17.030 "read": true, 00:11:17.030 "write": true, 00:11:17.030 "unmap": true, 00:11:17.030 "flush": true, 00:11:17.030 "reset": true, 00:11:17.030 "nvme_admin": false, 00:11:17.030 "nvme_io": false, 00:11:17.030 "nvme_io_md": false, 00:11:17.030 "write_zeroes": true, 00:11:17.030 "zcopy": true, 00:11:17.030 "get_zone_info": false, 00:11:17.030 "zone_management": false, 00:11:17.030 "zone_append": false, 00:11:17.030 "compare": false, 00:11:17.030 "compare_and_write": false, 00:11:17.030 "abort": true, 00:11:17.030 "seek_hole": false, 00:11:17.030 "seek_data": false, 00:11:17.030 "copy": true, 00:11:17.290 "nvme_iov_md": false 00:11:17.290 }, 00:11:17.290 "memory_domains": [ 00:11:17.290 { 00:11:17.290 
"dma_device_id": "system", 00:11:17.290 "dma_device_type": 1 00:11:17.290 }, 00:11:17.290 { 00:11:17.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.290 "dma_device_type": 2 00:11:17.290 } 00:11:17.290 ], 00:11:17.290 "driver_specific": {} 00:11:17.290 } 00:11:17.290 ] 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.290 "name": "Existed_Raid", 00:11:17.290 "uuid": "34aee374-d588-4f21-98f2-8adfb4c1fbc1", 00:11:17.290 "strip_size_kb": 0, 00:11:17.290 "state": "online", 00:11:17.290 "raid_level": "raid1", 00:11:17.290 "superblock": false, 00:11:17.290 "num_base_bdevs": 3, 00:11:17.290 "num_base_bdevs_discovered": 3, 00:11:17.290 "num_base_bdevs_operational": 3, 00:11:17.290 "base_bdevs_list": [ 00:11:17.290 { 00:11:17.290 "name": "NewBaseBdev", 00:11:17.290 "uuid": "38cbe426-0957-44fc-ac27-3edb0ab73442", 00:11:17.290 "is_configured": true, 00:11:17.290 "data_offset": 0, 00:11:17.290 "data_size": 65536 00:11:17.290 }, 00:11:17.290 { 00:11:17.290 "name": "BaseBdev2", 00:11:17.290 "uuid": "fac3ba0f-62ca-41f1-938b-2c43fee2f47b", 00:11:17.290 "is_configured": true, 00:11:17.290 "data_offset": 0, 00:11:17.290 "data_size": 65536 00:11:17.290 }, 00:11:17.290 { 00:11:17.290 "name": "BaseBdev3", 00:11:17.290 "uuid": "12d1ccaf-bfa9-4adb-b070-a3c37903abe4", 00:11:17.290 "is_configured": true, 00:11:17.290 "data_offset": 0, 00:11:17.290 "data_size": 65536 00:11:17.290 } 00:11:17.290 ] 00:11:17.290 }' 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.290 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.549 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:17.549 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:17.549 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:17.549 14:21:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:17.549 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:17.549 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:17.549 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:17.549 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.549 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:17.549 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.549 [2024-11-20 14:21:56.517695] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.808 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.808 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:17.808 "name": "Existed_Raid", 00:11:17.808 "aliases": [ 00:11:17.808 "34aee374-d588-4f21-98f2-8adfb4c1fbc1" 00:11:17.808 ], 00:11:17.808 "product_name": "Raid Volume", 00:11:17.808 "block_size": 512, 00:11:17.808 "num_blocks": 65536, 00:11:17.808 "uuid": "34aee374-d588-4f21-98f2-8adfb4c1fbc1", 00:11:17.808 "assigned_rate_limits": { 00:11:17.808 "rw_ios_per_sec": 0, 00:11:17.808 "rw_mbytes_per_sec": 0, 00:11:17.808 "r_mbytes_per_sec": 0, 00:11:17.808 "w_mbytes_per_sec": 0 00:11:17.808 }, 00:11:17.808 "claimed": false, 00:11:17.808 "zoned": false, 00:11:17.808 "supported_io_types": { 00:11:17.808 "read": true, 00:11:17.808 "write": true, 00:11:17.808 "unmap": false, 00:11:17.808 "flush": false, 00:11:17.808 "reset": true, 00:11:17.808 "nvme_admin": false, 00:11:17.808 "nvme_io": false, 00:11:17.808 "nvme_io_md": false, 00:11:17.808 "write_zeroes": true, 00:11:17.808 "zcopy": false, 00:11:17.808 
"get_zone_info": false, 00:11:17.808 "zone_management": false, 00:11:17.808 "zone_append": false, 00:11:17.808 "compare": false, 00:11:17.808 "compare_and_write": false, 00:11:17.808 "abort": false, 00:11:17.808 "seek_hole": false, 00:11:17.808 "seek_data": false, 00:11:17.808 "copy": false, 00:11:17.808 "nvme_iov_md": false 00:11:17.808 }, 00:11:17.808 "memory_domains": [ 00:11:17.808 { 00:11:17.808 "dma_device_id": "system", 00:11:17.808 "dma_device_type": 1 00:11:17.808 }, 00:11:17.808 { 00:11:17.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.808 "dma_device_type": 2 00:11:17.808 }, 00:11:17.808 { 00:11:17.808 "dma_device_id": "system", 00:11:17.808 "dma_device_type": 1 00:11:17.808 }, 00:11:17.808 { 00:11:17.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.808 "dma_device_type": 2 00:11:17.808 }, 00:11:17.808 { 00:11:17.808 "dma_device_id": "system", 00:11:17.808 "dma_device_type": 1 00:11:17.808 }, 00:11:17.808 { 00:11:17.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.808 "dma_device_type": 2 00:11:17.808 } 00:11:17.808 ], 00:11:17.808 "driver_specific": { 00:11:17.808 "raid": { 00:11:17.808 "uuid": "34aee374-d588-4f21-98f2-8adfb4c1fbc1", 00:11:17.808 "strip_size_kb": 0, 00:11:17.808 "state": "online", 00:11:17.808 "raid_level": "raid1", 00:11:17.808 "superblock": false, 00:11:17.808 "num_base_bdevs": 3, 00:11:17.808 "num_base_bdevs_discovered": 3, 00:11:17.808 "num_base_bdevs_operational": 3, 00:11:17.808 "base_bdevs_list": [ 00:11:17.808 { 00:11:17.808 "name": "NewBaseBdev", 00:11:17.808 "uuid": "38cbe426-0957-44fc-ac27-3edb0ab73442", 00:11:17.808 "is_configured": true, 00:11:17.808 "data_offset": 0, 00:11:17.808 "data_size": 65536 00:11:17.808 }, 00:11:17.808 { 00:11:17.808 "name": "BaseBdev2", 00:11:17.808 "uuid": "fac3ba0f-62ca-41f1-938b-2c43fee2f47b", 00:11:17.808 "is_configured": true, 00:11:17.808 "data_offset": 0, 00:11:17.808 "data_size": 65536 00:11:17.808 }, 00:11:17.808 { 00:11:17.808 "name": "BaseBdev3", 00:11:17.808 "uuid": 
"12d1ccaf-bfa9-4adb-b070-a3c37903abe4", 00:11:17.808 "is_configured": true, 00:11:17.808 "data_offset": 0, 00:11:17.808 "data_size": 65536 00:11:17.808 } 00:11:17.808 ] 00:11:17.808 } 00:11:17.808 } 00:11:17.808 }' 00:11:17.808 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.808 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:17.808 BaseBdev2 00:11:17.808 BaseBdev3' 00:11:17.808 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.808 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.808 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.809 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.809 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:17.809 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.809 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.809 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.809 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.809 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.809 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.809 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:11:17.809 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.809 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.809 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.809 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.067 
[2024-11-20 14:21:56.913457] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.067 [2024-11-20 14:21:56.913503] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.067 [2024-11-20 14:21:56.913605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.067 [2024-11-20 14:21:56.913968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.067 [2024-11-20 14:21:56.914011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67458 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67458 ']' 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67458 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67458 00:11:18.067 killing process with pid 67458 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67458' 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67458 00:11:18.067 [2024-11-20 
14:21:56.955070] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:18.067 14:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67458 00:11:18.325 [2024-11-20 14:21:57.226345] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:19.720 00:11:19.720 real 0m12.008s 00:11:19.720 user 0m19.992s 00:11:19.720 sys 0m1.616s 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.720 ************************************ 00:11:19.720 END TEST raid_state_function_test 00:11:19.720 ************************************ 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.720 14:21:58 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:19.720 14:21:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:19.720 14:21:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.720 14:21:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:19.720 ************************************ 00:11:19.720 START TEST raid_state_function_test_sb 00:11:19.720 ************************************ 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:19.720 14:21:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:19.720 
14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:19.720 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:19.720 Process raid pid: 68096 00:11:19.721 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:19.721 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68096 00:11:19.721 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68096' 00:11:19.721 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68096 00:11:19.721 14:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:19.721 14:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68096 ']' 00:11:19.721 14:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.721 14:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.721 14:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.721 14:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.721 14:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.721 [2024-11-20 14:21:58.440965] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:11:19.721 [2024-11-20 14:21:58.441173] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.721 [2024-11-20 14:21:58.634286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.979 [2024-11-20 14:21:58.754731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.237 [2024-11-20 14:21:58.960820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.237 [2024-11-20 14:21:58.960864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.495 [2024-11-20 14:21:59.363546] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:20.495 [2024-11-20 14:21:59.363615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:20.495 [2024-11-20 14:21:59.363634] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:20.495 [2024-11-20 14:21:59.363651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:20.495 [2024-11-20 14:21:59.363661] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:20.495 [2024-11-20 14:21:59.363675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.495 "name": "Existed_Raid", 00:11:20.495 "uuid": "4d273fd2-ffb6-4226-8bb4-2c9d66b4d26e", 00:11:20.495 "strip_size_kb": 0, 00:11:20.495 "state": "configuring", 00:11:20.495 "raid_level": "raid1", 00:11:20.495 "superblock": true, 00:11:20.495 "num_base_bdevs": 3, 00:11:20.495 "num_base_bdevs_discovered": 0, 00:11:20.495 "num_base_bdevs_operational": 3, 00:11:20.495 "base_bdevs_list": [ 00:11:20.495 { 00:11:20.495 "name": "BaseBdev1", 00:11:20.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.495 "is_configured": false, 00:11:20.495 "data_offset": 0, 00:11:20.495 "data_size": 0 00:11:20.495 }, 00:11:20.495 { 00:11:20.495 "name": "BaseBdev2", 00:11:20.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.495 "is_configured": false, 00:11:20.495 "data_offset": 0, 00:11:20.495 "data_size": 0 00:11:20.495 }, 00:11:20.495 { 00:11:20.495 "name": "BaseBdev3", 00:11:20.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.495 "is_configured": false, 00:11:20.495 "data_offset": 0, 00:11:20.495 "data_size": 0 00:11:20.495 } 00:11:20.495 ] 00:11:20.495 }' 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.495 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.058 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.058 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.058 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.058 [2024-11-20 14:21:59.871624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.058 [2024-11-20 14:21:59.871669] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:21.058 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.058 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:21.058 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.058 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.058 [2024-11-20 14:21:59.879607] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.058 [2024-11-20 14:21:59.879665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:21.058 [2024-11-20 14:21:59.879681] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.058 [2024-11-20 14:21:59.879697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.058 [2024-11-20 14:21:59.879707] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:21.058 [2024-11-20 14:21:59.879722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.058 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.058 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.059 [2024-11-20 14:21:59.924626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.059 BaseBdev1 
00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.059 [ 00:11:21.059 { 00:11:21.059 "name": "BaseBdev1", 00:11:21.059 "aliases": [ 00:11:21.059 "becfa7e7-61e4-453b-ba80-acb8b7c4e6ba" 00:11:21.059 ], 00:11:21.059 "product_name": "Malloc disk", 00:11:21.059 "block_size": 512, 00:11:21.059 "num_blocks": 65536, 00:11:21.059 "uuid": "becfa7e7-61e4-453b-ba80-acb8b7c4e6ba", 00:11:21.059 "assigned_rate_limits": { 00:11:21.059 
"rw_ios_per_sec": 0, 00:11:21.059 "rw_mbytes_per_sec": 0, 00:11:21.059 "r_mbytes_per_sec": 0, 00:11:21.059 "w_mbytes_per_sec": 0 00:11:21.059 }, 00:11:21.059 "claimed": true, 00:11:21.059 "claim_type": "exclusive_write", 00:11:21.059 "zoned": false, 00:11:21.059 "supported_io_types": { 00:11:21.059 "read": true, 00:11:21.059 "write": true, 00:11:21.059 "unmap": true, 00:11:21.059 "flush": true, 00:11:21.059 "reset": true, 00:11:21.059 "nvme_admin": false, 00:11:21.059 "nvme_io": false, 00:11:21.059 "nvme_io_md": false, 00:11:21.059 "write_zeroes": true, 00:11:21.059 "zcopy": true, 00:11:21.059 "get_zone_info": false, 00:11:21.059 "zone_management": false, 00:11:21.059 "zone_append": false, 00:11:21.059 "compare": false, 00:11:21.059 "compare_and_write": false, 00:11:21.059 "abort": true, 00:11:21.059 "seek_hole": false, 00:11:21.059 "seek_data": false, 00:11:21.059 "copy": true, 00:11:21.059 "nvme_iov_md": false 00:11:21.059 }, 00:11:21.059 "memory_domains": [ 00:11:21.059 { 00:11:21.059 "dma_device_id": "system", 00:11:21.059 "dma_device_type": 1 00:11:21.059 }, 00:11:21.059 { 00:11:21.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.059 "dma_device_type": 2 00:11:21.059 } 00:11:21.059 ], 00:11:21.059 "driver_specific": {} 00:11:21.059 } 00:11:21.059 ] 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.059 14:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.059 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.059 "name": "Existed_Raid", 00:11:21.059 "uuid": "843b3a37-21d7-4405-b246-d5253db631a0", 00:11:21.059 "strip_size_kb": 0, 00:11:21.059 "state": "configuring", 00:11:21.059 "raid_level": "raid1", 00:11:21.059 "superblock": true, 00:11:21.059 "num_base_bdevs": 3, 00:11:21.059 "num_base_bdevs_discovered": 1, 00:11:21.059 "num_base_bdevs_operational": 3, 00:11:21.059 "base_bdevs_list": [ 00:11:21.059 { 00:11:21.059 "name": "BaseBdev1", 00:11:21.059 "uuid": "becfa7e7-61e4-453b-ba80-acb8b7c4e6ba", 00:11:21.059 "is_configured": true, 00:11:21.059 "data_offset": 2048, 00:11:21.059 "data_size": 63488 
00:11:21.059 }, 00:11:21.059 { 00:11:21.059 "name": "BaseBdev2", 00:11:21.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.059 "is_configured": false, 00:11:21.059 "data_offset": 0, 00:11:21.059 "data_size": 0 00:11:21.059 }, 00:11:21.059 { 00:11:21.059 "name": "BaseBdev3", 00:11:21.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.059 "is_configured": false, 00:11:21.059 "data_offset": 0, 00:11:21.059 "data_size": 0 00:11:21.059 } 00:11:21.059 ] 00:11:21.059 }' 00:11:21.059 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.059 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.623 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.623 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.623 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.623 [2024-11-20 14:22:00.400821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.623 [2024-11-20 14:22:00.400885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:21.623 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.623 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:21.623 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.623 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.623 [2024-11-20 14:22:00.408860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.623 [2024-11-20 14:22:00.411487] 
bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.623 [2024-11-20 14:22:00.411676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.623 [2024-11-20 14:22:00.411813] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:21.623 [2024-11-20 14:22:00.411945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.623 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.623 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:21.623 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.623 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:21.623 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.623 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.623 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.624 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.624 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:21.624 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.624 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.624 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.624 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:21.624 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.624 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.624 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.624 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.624 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.624 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.624 "name": "Existed_Raid", 00:11:21.624 "uuid": "be6a6dc6-1c34-4f7a-a5e5-e54437dd53a6", 00:11:21.624 "strip_size_kb": 0, 00:11:21.624 "state": "configuring", 00:11:21.624 "raid_level": "raid1", 00:11:21.624 "superblock": true, 00:11:21.624 "num_base_bdevs": 3, 00:11:21.624 "num_base_bdevs_discovered": 1, 00:11:21.624 "num_base_bdevs_operational": 3, 00:11:21.624 "base_bdevs_list": [ 00:11:21.624 { 00:11:21.624 "name": "BaseBdev1", 00:11:21.624 "uuid": "becfa7e7-61e4-453b-ba80-acb8b7c4e6ba", 00:11:21.624 "is_configured": true, 00:11:21.624 "data_offset": 2048, 00:11:21.624 "data_size": 63488 00:11:21.624 }, 00:11:21.624 { 00:11:21.624 "name": "BaseBdev2", 00:11:21.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.624 "is_configured": false, 00:11:21.624 "data_offset": 0, 00:11:21.624 "data_size": 0 00:11:21.624 }, 00:11:21.624 { 00:11:21.624 "name": "BaseBdev3", 00:11:21.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.624 "is_configured": false, 00:11:21.624 "data_offset": 0, 00:11:21.624 "data_size": 0 00:11:21.624 } 00:11:21.624 ] 00:11:21.624 }' 00:11:21.624 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.624 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:22.189 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:22.189 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.189 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.189 [2024-11-20 14:22:00.927572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.189 BaseBdev2 00:11:22.189 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.189 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:22.189 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:22.189 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.189 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:22.189 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.189 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.189 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.189 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.189 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.189 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.189 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:22.189 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:22.189 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.189 [ 00:11:22.189 { 00:11:22.189 "name": "BaseBdev2", 00:11:22.189 "aliases": [ 00:11:22.189 "fe36b4ca-b65e-47a5-9252-ecfee34519b8" 00:11:22.189 ], 00:11:22.189 "product_name": "Malloc disk", 00:11:22.189 "block_size": 512, 00:11:22.189 "num_blocks": 65536, 00:11:22.189 "uuid": "fe36b4ca-b65e-47a5-9252-ecfee34519b8", 00:11:22.189 "assigned_rate_limits": { 00:11:22.189 "rw_ios_per_sec": 0, 00:11:22.189 "rw_mbytes_per_sec": 0, 00:11:22.189 "r_mbytes_per_sec": 0, 00:11:22.189 "w_mbytes_per_sec": 0 00:11:22.189 }, 00:11:22.190 "claimed": true, 00:11:22.190 "claim_type": "exclusive_write", 00:11:22.190 "zoned": false, 00:11:22.190 "supported_io_types": { 00:11:22.190 "read": true, 00:11:22.190 "write": true, 00:11:22.190 "unmap": true, 00:11:22.190 "flush": true, 00:11:22.190 "reset": true, 00:11:22.190 "nvme_admin": false, 00:11:22.190 "nvme_io": false, 00:11:22.190 "nvme_io_md": false, 00:11:22.190 "write_zeroes": true, 00:11:22.190 "zcopy": true, 00:11:22.190 "get_zone_info": false, 00:11:22.190 "zone_management": false, 00:11:22.190 "zone_append": false, 00:11:22.190 "compare": false, 00:11:22.190 "compare_and_write": false, 00:11:22.190 "abort": true, 00:11:22.190 "seek_hole": false, 00:11:22.190 "seek_data": false, 00:11:22.190 "copy": true, 00:11:22.190 "nvme_iov_md": false 00:11:22.190 }, 00:11:22.190 "memory_domains": [ 00:11:22.190 { 00:11:22.190 "dma_device_id": "system", 00:11:22.190 "dma_device_type": 1 00:11:22.190 }, 00:11:22.190 { 00:11:22.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.190 "dma_device_type": 2 00:11:22.190 } 00:11:22.190 ], 00:11:22.190 "driver_specific": {} 00:11:22.190 } 00:11:22.190 ] 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.190 14:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.190 
14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.190 "name": "Existed_Raid", 00:11:22.190 "uuid": "be6a6dc6-1c34-4f7a-a5e5-e54437dd53a6", 00:11:22.190 "strip_size_kb": 0, 00:11:22.190 "state": "configuring", 00:11:22.190 "raid_level": "raid1", 00:11:22.190 "superblock": true, 00:11:22.190 "num_base_bdevs": 3, 00:11:22.190 "num_base_bdevs_discovered": 2, 00:11:22.190 "num_base_bdevs_operational": 3, 00:11:22.190 "base_bdevs_list": [ 00:11:22.190 { 00:11:22.190 "name": "BaseBdev1", 00:11:22.190 "uuid": "becfa7e7-61e4-453b-ba80-acb8b7c4e6ba", 00:11:22.190 "is_configured": true, 00:11:22.190 "data_offset": 2048, 00:11:22.190 "data_size": 63488 00:11:22.190 }, 00:11:22.190 { 00:11:22.190 "name": "BaseBdev2", 00:11:22.190 "uuid": "fe36b4ca-b65e-47a5-9252-ecfee34519b8", 00:11:22.190 "is_configured": true, 00:11:22.190 "data_offset": 2048, 00:11:22.190 "data_size": 63488 00:11:22.190 }, 00:11:22.190 { 00:11:22.190 "name": "BaseBdev3", 00:11:22.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.190 "is_configured": false, 00:11:22.190 "data_offset": 0, 00:11:22.190 "data_size": 0 00:11:22.190 } 00:11:22.190 ] 00:11:22.190 }' 00:11:22.190 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.190 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.757 [2024-11-20 14:22:01.521765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.757 [2024-11-20 14:22:01.522601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:11:22.757 [2024-11-20 14:22:01.522640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:22.757 BaseBdev3 00:11:22.757 [2024-11-20 14:22:01.523015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:22.757 [2024-11-20 14:22:01.523247] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:22.757 [2024-11-20 14:22:01.523272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:22.757 [2024-11-20 14:22:01.523456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.757 14:22:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.757 [ 00:11:22.757 { 00:11:22.757 "name": "BaseBdev3", 00:11:22.757 "aliases": [ 00:11:22.757 "7e0a0308-5da6-4334-9308-e476c448ea91" 00:11:22.757 ], 00:11:22.757 "product_name": "Malloc disk", 00:11:22.757 "block_size": 512, 00:11:22.757 "num_blocks": 65536, 00:11:22.757 "uuid": "7e0a0308-5da6-4334-9308-e476c448ea91", 00:11:22.757 "assigned_rate_limits": { 00:11:22.757 "rw_ios_per_sec": 0, 00:11:22.757 "rw_mbytes_per_sec": 0, 00:11:22.757 "r_mbytes_per_sec": 0, 00:11:22.757 "w_mbytes_per_sec": 0 00:11:22.757 }, 00:11:22.757 "claimed": true, 00:11:22.757 "claim_type": "exclusive_write", 00:11:22.757 "zoned": false, 00:11:22.757 "supported_io_types": { 00:11:22.757 "read": true, 00:11:22.757 "write": true, 00:11:22.757 "unmap": true, 00:11:22.757 "flush": true, 00:11:22.757 "reset": true, 00:11:22.757 "nvme_admin": false, 00:11:22.757 "nvme_io": false, 00:11:22.757 "nvme_io_md": false, 00:11:22.757 "write_zeroes": true, 00:11:22.757 "zcopy": true, 00:11:22.757 "get_zone_info": false, 00:11:22.757 "zone_management": false, 00:11:22.757 "zone_append": false, 00:11:22.757 "compare": false, 00:11:22.757 "compare_and_write": false, 00:11:22.757 "abort": true, 00:11:22.757 "seek_hole": false, 00:11:22.757 "seek_data": false, 00:11:22.757 "copy": true, 00:11:22.757 "nvme_iov_md": false 00:11:22.757 }, 00:11:22.757 "memory_domains": [ 00:11:22.757 { 00:11:22.757 "dma_device_id": "system", 00:11:22.757 "dma_device_type": 1 00:11:22.757 }, 00:11:22.757 { 00:11:22.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.757 "dma_device_type": 2 00:11:22.757 } 00:11:22.757 ], 00:11:22.757 "driver_specific": {} 00:11:22.757 } 00:11:22.757 ] 
00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.757 
14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.757 "name": "Existed_Raid", 00:11:22.757 "uuid": "be6a6dc6-1c34-4f7a-a5e5-e54437dd53a6", 00:11:22.757 "strip_size_kb": 0, 00:11:22.757 "state": "online", 00:11:22.757 "raid_level": "raid1", 00:11:22.757 "superblock": true, 00:11:22.757 "num_base_bdevs": 3, 00:11:22.757 "num_base_bdevs_discovered": 3, 00:11:22.757 "num_base_bdevs_operational": 3, 00:11:22.757 "base_bdevs_list": [ 00:11:22.757 { 00:11:22.757 "name": "BaseBdev1", 00:11:22.757 "uuid": "becfa7e7-61e4-453b-ba80-acb8b7c4e6ba", 00:11:22.757 "is_configured": true, 00:11:22.757 "data_offset": 2048, 00:11:22.757 "data_size": 63488 00:11:22.757 }, 00:11:22.757 { 00:11:22.757 "name": "BaseBdev2", 00:11:22.757 "uuid": "fe36b4ca-b65e-47a5-9252-ecfee34519b8", 00:11:22.757 "is_configured": true, 00:11:22.757 "data_offset": 2048, 00:11:22.757 "data_size": 63488 00:11:22.757 }, 00:11:22.757 { 00:11:22.757 "name": "BaseBdev3", 00:11:22.757 "uuid": "7e0a0308-5da6-4334-9308-e476c448ea91", 00:11:22.757 "is_configured": true, 00:11:22.757 "data_offset": 2048, 00:11:22.757 "data_size": 63488 00:11:22.757 } 00:11:22.757 ] 00:11:22.757 }' 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.757 14:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.326 [2024-11-20 14:22:02.070363] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:23.326 "name": "Existed_Raid", 00:11:23.326 "aliases": [ 00:11:23.326 "be6a6dc6-1c34-4f7a-a5e5-e54437dd53a6" 00:11:23.326 ], 00:11:23.326 "product_name": "Raid Volume", 00:11:23.326 "block_size": 512, 00:11:23.326 "num_blocks": 63488, 00:11:23.326 "uuid": "be6a6dc6-1c34-4f7a-a5e5-e54437dd53a6", 00:11:23.326 "assigned_rate_limits": { 00:11:23.326 "rw_ios_per_sec": 0, 00:11:23.326 "rw_mbytes_per_sec": 0, 00:11:23.326 "r_mbytes_per_sec": 0, 00:11:23.326 "w_mbytes_per_sec": 0 00:11:23.326 }, 00:11:23.326 "claimed": false, 00:11:23.326 "zoned": false, 00:11:23.326 "supported_io_types": { 00:11:23.326 "read": true, 00:11:23.326 "write": true, 00:11:23.326 "unmap": false, 00:11:23.326 "flush": false, 00:11:23.326 "reset": true, 00:11:23.326 "nvme_admin": false, 00:11:23.326 "nvme_io": false, 00:11:23.326 "nvme_io_md": false, 00:11:23.326 "write_zeroes": true, 
00:11:23.326 "zcopy": false, 00:11:23.326 "get_zone_info": false, 00:11:23.326 "zone_management": false, 00:11:23.326 "zone_append": false, 00:11:23.326 "compare": false, 00:11:23.326 "compare_and_write": false, 00:11:23.326 "abort": false, 00:11:23.326 "seek_hole": false, 00:11:23.326 "seek_data": false, 00:11:23.326 "copy": false, 00:11:23.326 "nvme_iov_md": false 00:11:23.326 }, 00:11:23.326 "memory_domains": [ 00:11:23.326 { 00:11:23.326 "dma_device_id": "system", 00:11:23.326 "dma_device_type": 1 00:11:23.326 }, 00:11:23.326 { 00:11:23.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.326 "dma_device_type": 2 00:11:23.326 }, 00:11:23.326 { 00:11:23.326 "dma_device_id": "system", 00:11:23.326 "dma_device_type": 1 00:11:23.326 }, 00:11:23.326 { 00:11:23.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.326 "dma_device_type": 2 00:11:23.326 }, 00:11:23.326 { 00:11:23.326 "dma_device_id": "system", 00:11:23.326 "dma_device_type": 1 00:11:23.326 }, 00:11:23.326 { 00:11:23.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.326 "dma_device_type": 2 00:11:23.326 } 00:11:23.326 ], 00:11:23.326 "driver_specific": { 00:11:23.326 "raid": { 00:11:23.326 "uuid": "be6a6dc6-1c34-4f7a-a5e5-e54437dd53a6", 00:11:23.326 "strip_size_kb": 0, 00:11:23.326 "state": "online", 00:11:23.326 "raid_level": "raid1", 00:11:23.326 "superblock": true, 00:11:23.326 "num_base_bdevs": 3, 00:11:23.326 "num_base_bdevs_discovered": 3, 00:11:23.326 "num_base_bdevs_operational": 3, 00:11:23.326 "base_bdevs_list": [ 00:11:23.326 { 00:11:23.326 "name": "BaseBdev1", 00:11:23.326 "uuid": "becfa7e7-61e4-453b-ba80-acb8b7c4e6ba", 00:11:23.326 "is_configured": true, 00:11:23.326 "data_offset": 2048, 00:11:23.326 "data_size": 63488 00:11:23.326 }, 00:11:23.326 { 00:11:23.326 "name": "BaseBdev2", 00:11:23.326 "uuid": "fe36b4ca-b65e-47a5-9252-ecfee34519b8", 00:11:23.326 "is_configured": true, 00:11:23.326 "data_offset": 2048, 00:11:23.326 "data_size": 63488 00:11:23.326 }, 00:11:23.326 { 
00:11:23.326 "name": "BaseBdev3", 00:11:23.326 "uuid": "7e0a0308-5da6-4334-9308-e476c448ea91", 00:11:23.326 "is_configured": true, 00:11:23.326 "data_offset": 2048, 00:11:23.326 "data_size": 63488 00:11:23.326 } 00:11:23.326 ] 00:11:23.326 } 00:11:23.326 } 00:11:23.326 }' 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:23.326 BaseBdev2 00:11:23.326 BaseBdev3' 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.326 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.326 14:22:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:23.327 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.327 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.327 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.327 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.586 [2024-11-20 14:22:02.374111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.586 
14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.586 "name": "Existed_Raid", 00:11:23.586 "uuid": "be6a6dc6-1c34-4f7a-a5e5-e54437dd53a6", 00:11:23.586 "strip_size_kb": 0, 00:11:23.586 "state": "online", 00:11:23.586 "raid_level": "raid1", 00:11:23.586 "superblock": true, 00:11:23.586 "num_base_bdevs": 3, 00:11:23.586 "num_base_bdevs_discovered": 2, 00:11:23.586 "num_base_bdevs_operational": 2, 00:11:23.586 "base_bdevs_list": [ 00:11:23.586 { 00:11:23.586 "name": null, 00:11:23.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.586 "is_configured": false, 00:11:23.586 "data_offset": 0, 00:11:23.586 "data_size": 63488 00:11:23.586 }, 00:11:23.586 { 00:11:23.586 "name": "BaseBdev2", 00:11:23.586 "uuid": "fe36b4ca-b65e-47a5-9252-ecfee34519b8", 00:11:23.586 "is_configured": true, 00:11:23.586 "data_offset": 2048, 00:11:23.586 "data_size": 63488 00:11:23.586 }, 00:11:23.586 { 00:11:23.586 "name": "BaseBdev3", 00:11:23.586 "uuid": "7e0a0308-5da6-4334-9308-e476c448ea91", 00:11:23.586 "is_configured": true, 00:11:23.586 "data_offset": 2048, 00:11:23.586 "data_size": 63488 00:11:23.586 } 00:11:23.586 ] 00:11:23.586 }' 00:11:23.586 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.586 
14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.154 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:24.154 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.154 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.154 14:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:24.154 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.154 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.154 14:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.154 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:24.154 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:24.154 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:24.154 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.154 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.154 [2024-11-20 14:22:03.025282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:24.154 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.154 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:24.154 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.154 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:24.154 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.154 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.154 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:24.154 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.413 [2024-11-20 14:22:03.175254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:24.413 [2024-11-20 14:22:03.175386] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:24.413 [2024-11-20 14:22:03.263050] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.413 [2024-11-20 14:22:03.263143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:24.413 [2024-11-20 14:22:03.263165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.413 BaseBdev2 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.413 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.413 [ 00:11:24.413 { 00:11:24.413 "name": "BaseBdev2", 00:11:24.413 "aliases": [ 00:11:24.413 "8dd6ed85-147f-47ce-b923-d03b23ac782b" 00:11:24.413 ], 00:11:24.413 "product_name": "Malloc disk", 00:11:24.413 "block_size": 512, 00:11:24.413 "num_blocks": 65536, 00:11:24.413 "uuid": "8dd6ed85-147f-47ce-b923-d03b23ac782b", 00:11:24.413 "assigned_rate_limits": { 00:11:24.413 "rw_ios_per_sec": 0, 00:11:24.413 "rw_mbytes_per_sec": 0, 00:11:24.413 "r_mbytes_per_sec": 0, 00:11:24.413 "w_mbytes_per_sec": 0 00:11:24.413 }, 00:11:24.413 "claimed": false, 00:11:24.413 "zoned": false, 00:11:24.413 "supported_io_types": { 00:11:24.413 "read": true, 00:11:24.413 "write": true, 00:11:24.413 "unmap": true, 00:11:24.413 "flush": true, 00:11:24.413 "reset": true, 00:11:24.413 "nvme_admin": false, 00:11:24.413 "nvme_io": false, 00:11:24.413 
"nvme_io_md": false, 00:11:24.413 "write_zeroes": true, 00:11:24.413 "zcopy": true, 00:11:24.413 "get_zone_info": false, 00:11:24.413 "zone_management": false, 00:11:24.413 "zone_append": false, 00:11:24.413 "compare": false, 00:11:24.413 "compare_and_write": false, 00:11:24.413 "abort": true, 00:11:24.413 "seek_hole": false, 00:11:24.413 "seek_data": false, 00:11:24.413 "copy": true, 00:11:24.413 "nvme_iov_md": false 00:11:24.413 }, 00:11:24.413 "memory_domains": [ 00:11:24.413 { 00:11:24.413 "dma_device_id": "system", 00:11:24.413 "dma_device_type": 1 00:11:24.413 }, 00:11:24.413 { 00:11:24.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.413 "dma_device_type": 2 00:11:24.413 } 00:11:24.413 ], 00:11:24.672 "driver_specific": {} 00:11:24.672 } 00:11:24.672 ] 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.672 BaseBdev3 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.672 [ 00:11:24.672 { 00:11:24.672 "name": "BaseBdev3", 00:11:24.672 "aliases": [ 00:11:24.672 "a64f8a45-9b2f-415b-b4d2-f4c30221d5d9" 00:11:24.672 ], 00:11:24.672 "product_name": "Malloc disk", 00:11:24.672 "block_size": 512, 00:11:24.672 "num_blocks": 65536, 00:11:24.672 "uuid": "a64f8a45-9b2f-415b-b4d2-f4c30221d5d9", 00:11:24.672 "assigned_rate_limits": { 00:11:24.672 "rw_ios_per_sec": 0, 00:11:24.672 "rw_mbytes_per_sec": 0, 00:11:24.672 "r_mbytes_per_sec": 0, 00:11:24.672 "w_mbytes_per_sec": 0 00:11:24.672 }, 00:11:24.672 "claimed": false, 00:11:24.672 "zoned": false, 00:11:24.672 "supported_io_types": { 00:11:24.672 "read": true, 00:11:24.672 "write": true, 00:11:24.672 "unmap": true, 00:11:24.672 "flush": true, 00:11:24.672 "reset": true, 00:11:24.672 "nvme_admin": false, 
00:11:24.672 "nvme_io": false, 00:11:24.672 "nvme_io_md": false, 00:11:24.672 "write_zeroes": true, 00:11:24.672 "zcopy": true, 00:11:24.672 "get_zone_info": false, 00:11:24.672 "zone_management": false, 00:11:24.672 "zone_append": false, 00:11:24.672 "compare": false, 00:11:24.672 "compare_and_write": false, 00:11:24.672 "abort": true, 00:11:24.672 "seek_hole": false, 00:11:24.672 "seek_data": false, 00:11:24.672 "copy": true, 00:11:24.672 "nvme_iov_md": false 00:11:24.672 }, 00:11:24.672 "memory_domains": [ 00:11:24.672 { 00:11:24.672 "dma_device_id": "system", 00:11:24.672 "dma_device_type": 1 00:11:24.672 }, 00:11:24.672 { 00:11:24.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.672 "dma_device_type": 2 00:11:24.672 } 00:11:24.672 ], 00:11:24.672 "driver_specific": {} 00:11:24.672 } 00:11:24.672 ] 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.672 [2024-11-20 14:22:03.473352] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.672 [2024-11-20 14:22:03.473592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.672 [2024-11-20 14:22:03.473775] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.672 [2024-11-20 14:22:03.476447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.672 
14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.672 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.672 "name": "Existed_Raid", 00:11:24.672 "uuid": "c539e156-01cc-47ce-82c6-5c6af2a30479", 00:11:24.672 "strip_size_kb": 0, 00:11:24.672 "state": "configuring", 00:11:24.672 "raid_level": "raid1", 00:11:24.672 "superblock": true, 00:11:24.672 "num_base_bdevs": 3, 00:11:24.672 "num_base_bdevs_discovered": 2, 00:11:24.672 "num_base_bdevs_operational": 3, 00:11:24.672 "base_bdevs_list": [ 00:11:24.672 { 00:11:24.672 "name": "BaseBdev1", 00:11:24.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.672 "is_configured": false, 00:11:24.672 "data_offset": 0, 00:11:24.672 "data_size": 0 00:11:24.672 }, 00:11:24.672 { 00:11:24.672 "name": "BaseBdev2", 00:11:24.672 "uuid": "8dd6ed85-147f-47ce-b923-d03b23ac782b", 00:11:24.672 "is_configured": true, 00:11:24.672 "data_offset": 2048, 00:11:24.672 "data_size": 63488 00:11:24.672 }, 00:11:24.672 { 00:11:24.672 "name": "BaseBdev3", 00:11:24.672 "uuid": "a64f8a45-9b2f-415b-b4d2-f4c30221d5d9", 00:11:24.672 "is_configured": true, 00:11:24.673 "data_offset": 2048, 00:11:24.673 "data_size": 63488 00:11:24.673 } 00:11:24.673 ] 00:11:24.673 }' 00:11:24.673 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.673 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.240 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:25.240 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.240 14:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.240 [2024-11-20 14:22:03.993491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:25.240 14:22:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.240 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:25.240 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.240 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.240 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.240 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.240 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.240 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.240 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.240 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.240 14:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.240 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.240 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.240 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.240 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.240 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.240 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.240 "name": 
"Existed_Raid", 00:11:25.240 "uuid": "c539e156-01cc-47ce-82c6-5c6af2a30479", 00:11:25.240 "strip_size_kb": 0, 00:11:25.240 "state": "configuring", 00:11:25.240 "raid_level": "raid1", 00:11:25.240 "superblock": true, 00:11:25.240 "num_base_bdevs": 3, 00:11:25.240 "num_base_bdevs_discovered": 1, 00:11:25.240 "num_base_bdevs_operational": 3, 00:11:25.240 "base_bdevs_list": [ 00:11:25.240 { 00:11:25.240 "name": "BaseBdev1", 00:11:25.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.240 "is_configured": false, 00:11:25.240 "data_offset": 0, 00:11:25.240 "data_size": 0 00:11:25.240 }, 00:11:25.240 { 00:11:25.240 "name": null, 00:11:25.240 "uuid": "8dd6ed85-147f-47ce-b923-d03b23ac782b", 00:11:25.240 "is_configured": false, 00:11:25.240 "data_offset": 0, 00:11:25.241 "data_size": 63488 00:11:25.241 }, 00:11:25.241 { 00:11:25.241 "name": "BaseBdev3", 00:11:25.241 "uuid": "a64f8a45-9b2f-415b-b4d2-f4c30221d5d9", 00:11:25.241 "is_configured": true, 00:11:25.241 "data_offset": 2048, 00:11:25.241 "data_size": 63488 00:11:25.241 } 00:11:25.241 ] 00:11:25.241 }' 00:11:25.241 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.241 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.809 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.809 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:25.809 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:25.810 
14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.810 [2024-11-20 14:22:04.576143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.810 BaseBdev1 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.810 [ 00:11:25.810 { 00:11:25.810 "name": "BaseBdev1", 00:11:25.810 "aliases": [ 00:11:25.810 "ffe98093-3ea3-4f08-81d9-b15b90b9e860" 00:11:25.810 ], 00:11:25.810 "product_name": "Malloc disk", 00:11:25.810 "block_size": 512, 00:11:25.810 "num_blocks": 65536, 00:11:25.810 "uuid": "ffe98093-3ea3-4f08-81d9-b15b90b9e860", 00:11:25.810 "assigned_rate_limits": { 00:11:25.810 "rw_ios_per_sec": 0, 00:11:25.810 "rw_mbytes_per_sec": 0, 00:11:25.810 "r_mbytes_per_sec": 0, 00:11:25.810 "w_mbytes_per_sec": 0 00:11:25.810 }, 00:11:25.810 "claimed": true, 00:11:25.810 "claim_type": "exclusive_write", 00:11:25.810 "zoned": false, 00:11:25.810 "supported_io_types": { 00:11:25.810 "read": true, 00:11:25.810 "write": true, 00:11:25.810 "unmap": true, 00:11:25.810 "flush": true, 00:11:25.810 "reset": true, 00:11:25.810 "nvme_admin": false, 00:11:25.810 "nvme_io": false, 00:11:25.810 "nvme_io_md": false, 00:11:25.810 "write_zeroes": true, 00:11:25.810 "zcopy": true, 00:11:25.810 "get_zone_info": false, 00:11:25.810 "zone_management": false, 00:11:25.810 "zone_append": false, 00:11:25.810 "compare": false, 00:11:25.810 "compare_and_write": false, 00:11:25.810 "abort": true, 00:11:25.810 "seek_hole": false, 00:11:25.810 "seek_data": false, 00:11:25.810 "copy": true, 00:11:25.810 "nvme_iov_md": false 00:11:25.810 }, 00:11:25.810 "memory_domains": [ 00:11:25.810 { 00:11:25.810 "dma_device_id": "system", 00:11:25.810 "dma_device_type": 1 00:11:25.810 }, 00:11:25.810 { 00:11:25.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.810 "dma_device_type": 2 00:11:25.810 } 00:11:25.810 ], 00:11:25.810 "driver_specific": {} 00:11:25.810 } 00:11:25.810 ] 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:25.810 
14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.810 "name": "Existed_Raid", 00:11:25.810 "uuid": "c539e156-01cc-47ce-82c6-5c6af2a30479", 00:11:25.810 "strip_size_kb": 0, 
00:11:25.810 "state": "configuring", 00:11:25.810 "raid_level": "raid1", 00:11:25.810 "superblock": true, 00:11:25.810 "num_base_bdevs": 3, 00:11:25.810 "num_base_bdevs_discovered": 2, 00:11:25.810 "num_base_bdevs_operational": 3, 00:11:25.810 "base_bdevs_list": [ 00:11:25.810 { 00:11:25.810 "name": "BaseBdev1", 00:11:25.810 "uuid": "ffe98093-3ea3-4f08-81d9-b15b90b9e860", 00:11:25.810 "is_configured": true, 00:11:25.810 "data_offset": 2048, 00:11:25.810 "data_size": 63488 00:11:25.810 }, 00:11:25.810 { 00:11:25.810 "name": null, 00:11:25.810 "uuid": "8dd6ed85-147f-47ce-b923-d03b23ac782b", 00:11:25.810 "is_configured": false, 00:11:25.810 "data_offset": 0, 00:11:25.810 "data_size": 63488 00:11:25.810 }, 00:11:25.810 { 00:11:25.810 "name": "BaseBdev3", 00:11:25.810 "uuid": "a64f8a45-9b2f-415b-b4d2-f4c30221d5d9", 00:11:25.810 "is_configured": true, 00:11:25.810 "data_offset": 2048, 00:11:25.810 "data_size": 63488 00:11:25.810 } 00:11:25.810 ] 00:11:25.810 }' 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.810 14:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.378 [2024-11-20 14:22:05.184282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.378 "name": "Existed_Raid", 00:11:26.378 "uuid": "c539e156-01cc-47ce-82c6-5c6af2a30479", 00:11:26.378 "strip_size_kb": 0, 00:11:26.378 "state": "configuring", 00:11:26.378 "raid_level": "raid1", 00:11:26.378 "superblock": true, 00:11:26.378 "num_base_bdevs": 3, 00:11:26.378 "num_base_bdevs_discovered": 1, 00:11:26.378 "num_base_bdevs_operational": 3, 00:11:26.378 "base_bdevs_list": [ 00:11:26.378 { 00:11:26.378 "name": "BaseBdev1", 00:11:26.378 "uuid": "ffe98093-3ea3-4f08-81d9-b15b90b9e860", 00:11:26.378 "is_configured": true, 00:11:26.378 "data_offset": 2048, 00:11:26.378 "data_size": 63488 00:11:26.378 }, 00:11:26.378 { 00:11:26.378 "name": null, 00:11:26.378 "uuid": "8dd6ed85-147f-47ce-b923-d03b23ac782b", 00:11:26.378 "is_configured": false, 00:11:26.378 "data_offset": 0, 00:11:26.378 "data_size": 63488 00:11:26.378 }, 00:11:26.378 { 00:11:26.378 "name": null, 00:11:26.378 "uuid": "a64f8a45-9b2f-415b-b4d2-f4c30221d5d9", 00:11:26.378 "is_configured": false, 00:11:26.378 "data_offset": 0, 00:11:26.378 "data_size": 63488 00:11:26.378 } 00:11:26.378 ] 00:11:26.378 }' 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.378 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.948 [2024-11-20 14:22:05.752511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.948 "name": "Existed_Raid", 00:11:26.948 "uuid": "c539e156-01cc-47ce-82c6-5c6af2a30479", 00:11:26.948 "strip_size_kb": 0, 00:11:26.948 "state": "configuring", 00:11:26.948 "raid_level": "raid1", 00:11:26.948 "superblock": true, 00:11:26.948 "num_base_bdevs": 3, 00:11:26.948 "num_base_bdevs_discovered": 2, 00:11:26.948 "num_base_bdevs_operational": 3, 00:11:26.948 "base_bdevs_list": [ 00:11:26.948 { 00:11:26.948 "name": "BaseBdev1", 00:11:26.948 "uuid": "ffe98093-3ea3-4f08-81d9-b15b90b9e860", 00:11:26.948 "is_configured": true, 00:11:26.948 "data_offset": 2048, 00:11:26.948 "data_size": 63488 00:11:26.948 }, 00:11:26.948 { 00:11:26.948 "name": null, 00:11:26.948 "uuid": "8dd6ed85-147f-47ce-b923-d03b23ac782b", 00:11:26.948 "is_configured": false, 00:11:26.948 "data_offset": 0, 00:11:26.948 "data_size": 63488 00:11:26.948 }, 00:11:26.948 { 00:11:26.948 "name": "BaseBdev3", 00:11:26.948 "uuid": "a64f8a45-9b2f-415b-b4d2-f4c30221d5d9", 00:11:26.948 "is_configured": true, 00:11:26.948 "data_offset": 2048, 00:11:26.948 "data_size": 63488 00:11:26.948 } 00:11:26.948 ] 00:11:26.948 }' 00:11:26.948 14:22:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.948 14:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.515 [2024-11-20 14:22:06.316742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.515 14:22:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.515 "name": "Existed_Raid", 00:11:27.515 "uuid": "c539e156-01cc-47ce-82c6-5c6af2a30479", 00:11:27.515 "strip_size_kb": 0, 00:11:27.515 "state": "configuring", 00:11:27.515 "raid_level": "raid1", 00:11:27.515 "superblock": true, 00:11:27.515 "num_base_bdevs": 3, 00:11:27.515 "num_base_bdevs_discovered": 1, 00:11:27.515 "num_base_bdevs_operational": 3, 00:11:27.515 "base_bdevs_list": [ 00:11:27.515 { 00:11:27.515 "name": null, 00:11:27.515 "uuid": "ffe98093-3ea3-4f08-81d9-b15b90b9e860", 00:11:27.515 "is_configured": false, 00:11:27.515 "data_offset": 0, 00:11:27.515 "data_size": 63488 00:11:27.515 }, 00:11:27.515 { 00:11:27.515 
"name": null, 00:11:27.515 "uuid": "8dd6ed85-147f-47ce-b923-d03b23ac782b", 00:11:27.515 "is_configured": false, 00:11:27.515 "data_offset": 0, 00:11:27.515 "data_size": 63488 00:11:27.515 }, 00:11:27.515 { 00:11:27.515 "name": "BaseBdev3", 00:11:27.515 "uuid": "a64f8a45-9b2f-415b-b4d2-f4c30221d5d9", 00:11:27.515 "is_configured": true, 00:11:27.515 "data_offset": 2048, 00:11:27.515 "data_size": 63488 00:11:27.515 } 00:11:27.515 ] 00:11:27.515 }' 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.515 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.084 [2024-11-20 14:22:06.956991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.084 14:22:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.084 14:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.084 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.084 "name": "Existed_Raid", 00:11:28.084 "uuid": "c539e156-01cc-47ce-82c6-5c6af2a30479", 00:11:28.084 "strip_size_kb": 0, 
00:11:28.084 "state": "configuring", 00:11:28.084 "raid_level": "raid1", 00:11:28.084 "superblock": true, 00:11:28.084 "num_base_bdevs": 3, 00:11:28.084 "num_base_bdevs_discovered": 2, 00:11:28.084 "num_base_bdevs_operational": 3, 00:11:28.084 "base_bdevs_list": [ 00:11:28.084 { 00:11:28.084 "name": null, 00:11:28.084 "uuid": "ffe98093-3ea3-4f08-81d9-b15b90b9e860", 00:11:28.084 "is_configured": false, 00:11:28.084 "data_offset": 0, 00:11:28.084 "data_size": 63488 00:11:28.084 }, 00:11:28.084 { 00:11:28.084 "name": "BaseBdev2", 00:11:28.084 "uuid": "8dd6ed85-147f-47ce-b923-d03b23ac782b", 00:11:28.084 "is_configured": true, 00:11:28.084 "data_offset": 2048, 00:11:28.084 "data_size": 63488 00:11:28.084 }, 00:11:28.084 { 00:11:28.084 "name": "BaseBdev3", 00:11:28.084 "uuid": "a64f8a45-9b2f-415b-b4d2-f4c30221d5d9", 00:11:28.084 "is_configured": true, 00:11:28.084 "data_offset": 2048, 00:11:28.084 "data_size": 63488 00:11:28.084 } 00:11:28.084 ] 00:11:28.084 }' 00:11:28.084 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.084 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ffe98093-3ea3-4f08-81d9-b15b90b9e860 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.652 [2024-11-20 14:22:07.606302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:28.652 [2024-11-20 14:22:07.606563] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:28.652 [2024-11-20 14:22:07.606583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:28.652 NewBaseBdev 00:11:28.652 [2024-11-20 14:22:07.606908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:28.652 [2024-11-20 14:22:07.607126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:28.652 [2024-11-20 14:22:07.607158] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:28.652 [2024-11-20 14:22:07.607323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev 
NewBaseBdev 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.652 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.652 [ 00:11:28.652 { 00:11:28.652 "name": "NewBaseBdev", 00:11:28.652 "aliases": [ 00:11:28.652 "ffe98093-3ea3-4f08-81d9-b15b90b9e860" 00:11:28.652 ], 00:11:28.652 "product_name": "Malloc disk", 00:11:28.652 "block_size": 512, 00:11:28.652 "num_blocks": 65536, 00:11:28.652 "uuid": "ffe98093-3ea3-4f08-81d9-b15b90b9e860", 00:11:28.652 "assigned_rate_limits": { 00:11:28.652 "rw_ios_per_sec": 0, 00:11:28.652 "rw_mbytes_per_sec": 0, 00:11:28.652 "r_mbytes_per_sec": 0, 00:11:28.652 "w_mbytes_per_sec": 0 00:11:28.652 }, 00:11:28.652 "claimed": true, 00:11:28.652 "claim_type": 
"exclusive_write", 00:11:28.652 "zoned": false, 00:11:28.652 "supported_io_types": { 00:11:28.652 "read": true, 00:11:28.652 "write": true, 00:11:28.652 "unmap": true, 00:11:28.652 "flush": true, 00:11:28.652 "reset": true, 00:11:28.652 "nvme_admin": false, 00:11:28.911 "nvme_io": false, 00:11:28.911 "nvme_io_md": false, 00:11:28.911 "write_zeroes": true, 00:11:28.911 "zcopy": true, 00:11:28.911 "get_zone_info": false, 00:11:28.911 "zone_management": false, 00:11:28.911 "zone_append": false, 00:11:28.911 "compare": false, 00:11:28.911 "compare_and_write": false, 00:11:28.911 "abort": true, 00:11:28.911 "seek_hole": false, 00:11:28.911 "seek_data": false, 00:11:28.911 "copy": true, 00:11:28.911 "nvme_iov_md": false 00:11:28.911 }, 00:11:28.911 "memory_domains": [ 00:11:28.911 { 00:11:28.911 "dma_device_id": "system", 00:11:28.911 "dma_device_type": 1 00:11:28.911 }, 00:11:28.911 { 00:11:28.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.911 "dma_device_type": 2 00:11:28.911 } 00:11:28.911 ], 00:11:28.911 "driver_specific": {} 00:11:28.911 } 00:11:28.911 ] 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.911 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.911 "name": "Existed_Raid", 00:11:28.911 "uuid": "c539e156-01cc-47ce-82c6-5c6af2a30479", 00:11:28.911 "strip_size_kb": 0, 00:11:28.911 "state": "online", 00:11:28.911 "raid_level": "raid1", 00:11:28.911 "superblock": true, 00:11:28.911 "num_base_bdevs": 3, 00:11:28.912 "num_base_bdevs_discovered": 3, 00:11:28.912 "num_base_bdevs_operational": 3, 00:11:28.912 "base_bdevs_list": [ 00:11:28.912 { 00:11:28.912 "name": "NewBaseBdev", 00:11:28.912 "uuid": "ffe98093-3ea3-4f08-81d9-b15b90b9e860", 00:11:28.912 "is_configured": true, 00:11:28.912 "data_offset": 2048, 00:11:28.912 "data_size": 63488 00:11:28.912 }, 00:11:28.912 { 00:11:28.912 "name": "BaseBdev2", 00:11:28.912 "uuid": "8dd6ed85-147f-47ce-b923-d03b23ac782b", 00:11:28.912 "is_configured": true, 00:11:28.912 "data_offset": 2048, 00:11:28.912 "data_size": 63488 
00:11:28.912 }, 00:11:28.912 { 00:11:28.912 "name": "BaseBdev3", 00:11:28.912 "uuid": "a64f8a45-9b2f-415b-b4d2-f4c30221d5d9", 00:11:28.912 "is_configured": true, 00:11:28.912 "data_offset": 2048, 00:11:28.912 "data_size": 63488 00:11:28.912 } 00:11:28.912 ] 00:11:28.912 }' 00:11:28.912 14:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.912 14:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.171 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:29.171 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:29.171 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:29.171 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:29.171 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:29.171 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:29.171 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:29.171 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.171 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.171 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:29.171 [2024-11-20 14:22:08.143141] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.430 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.430 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:29.430 "name": 
"Existed_Raid", 00:11:29.430 "aliases": [ 00:11:29.430 "c539e156-01cc-47ce-82c6-5c6af2a30479" 00:11:29.430 ], 00:11:29.430 "product_name": "Raid Volume", 00:11:29.430 "block_size": 512, 00:11:29.430 "num_blocks": 63488, 00:11:29.430 "uuid": "c539e156-01cc-47ce-82c6-5c6af2a30479", 00:11:29.430 "assigned_rate_limits": { 00:11:29.430 "rw_ios_per_sec": 0, 00:11:29.430 "rw_mbytes_per_sec": 0, 00:11:29.430 "r_mbytes_per_sec": 0, 00:11:29.430 "w_mbytes_per_sec": 0 00:11:29.430 }, 00:11:29.430 "claimed": false, 00:11:29.430 "zoned": false, 00:11:29.431 "supported_io_types": { 00:11:29.431 "read": true, 00:11:29.431 "write": true, 00:11:29.431 "unmap": false, 00:11:29.431 "flush": false, 00:11:29.431 "reset": true, 00:11:29.431 "nvme_admin": false, 00:11:29.431 "nvme_io": false, 00:11:29.431 "nvme_io_md": false, 00:11:29.431 "write_zeroes": true, 00:11:29.431 "zcopy": false, 00:11:29.431 "get_zone_info": false, 00:11:29.431 "zone_management": false, 00:11:29.431 "zone_append": false, 00:11:29.431 "compare": false, 00:11:29.431 "compare_and_write": false, 00:11:29.431 "abort": false, 00:11:29.431 "seek_hole": false, 00:11:29.431 "seek_data": false, 00:11:29.431 "copy": false, 00:11:29.431 "nvme_iov_md": false 00:11:29.431 }, 00:11:29.431 "memory_domains": [ 00:11:29.431 { 00:11:29.431 "dma_device_id": "system", 00:11:29.431 "dma_device_type": 1 00:11:29.431 }, 00:11:29.431 { 00:11:29.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.431 "dma_device_type": 2 00:11:29.431 }, 00:11:29.431 { 00:11:29.431 "dma_device_id": "system", 00:11:29.431 "dma_device_type": 1 00:11:29.431 }, 00:11:29.431 { 00:11:29.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.431 "dma_device_type": 2 00:11:29.431 }, 00:11:29.431 { 00:11:29.431 "dma_device_id": "system", 00:11:29.431 "dma_device_type": 1 00:11:29.431 }, 00:11:29.431 { 00:11:29.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.431 "dma_device_type": 2 00:11:29.431 } 00:11:29.431 ], 00:11:29.431 "driver_specific": { 
00:11:29.431 "raid": { 00:11:29.431 "uuid": "c539e156-01cc-47ce-82c6-5c6af2a30479", 00:11:29.431 "strip_size_kb": 0, 00:11:29.431 "state": "online", 00:11:29.431 "raid_level": "raid1", 00:11:29.431 "superblock": true, 00:11:29.431 "num_base_bdevs": 3, 00:11:29.431 "num_base_bdevs_discovered": 3, 00:11:29.431 "num_base_bdevs_operational": 3, 00:11:29.431 "base_bdevs_list": [ 00:11:29.431 { 00:11:29.431 "name": "NewBaseBdev", 00:11:29.431 "uuid": "ffe98093-3ea3-4f08-81d9-b15b90b9e860", 00:11:29.431 "is_configured": true, 00:11:29.431 "data_offset": 2048, 00:11:29.431 "data_size": 63488 00:11:29.431 }, 00:11:29.431 { 00:11:29.431 "name": "BaseBdev2", 00:11:29.431 "uuid": "8dd6ed85-147f-47ce-b923-d03b23ac782b", 00:11:29.431 "is_configured": true, 00:11:29.431 "data_offset": 2048, 00:11:29.431 "data_size": 63488 00:11:29.431 }, 00:11:29.431 { 00:11:29.431 "name": "BaseBdev3", 00:11:29.431 "uuid": "a64f8a45-9b2f-415b-b4d2-f4c30221d5d9", 00:11:29.431 "is_configured": true, 00:11:29.431 "data_offset": 2048, 00:11:29.431 "data_size": 63488 00:11:29.431 } 00:11:29.431 ] 00:11:29.431 } 00:11:29.431 } 00:11:29.431 }' 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:29.431 BaseBdev2 00:11:29.431 BaseBdev3' 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:29.431 
14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.431 14:22:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.431 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.690 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.690 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.690 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.690 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:29.690 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.690 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.690 [2024-11-20 14:22:08.458807] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.690 [2024-11-20 14:22:08.458848] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.690 [2024-11-20 14:22:08.458936] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.690 [2024-11-20 14:22:08.459342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.690 [2024-11-20 14:22:08.459367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:29.690 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.690 14:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68096 00:11:29.690 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68096 ']' 00:11:29.690 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68096 00:11:29.690 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:29.690 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.690 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68096 00:11:29.691 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:29.691 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:29.691 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68096' 00:11:29.691 killing process with pid 68096 00:11:29.691 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68096 00:11:29.691 [2024-11-20 14:22:08.494186] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.691 14:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68096 00:11:29.949 [2024-11-20 14:22:08.747949] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:30.915 14:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:30.915 00:11:30.915 real 0m11.443s 00:11:30.915 user 0m18.953s 00:11:30.915 sys 0m1.575s 00:11:30.915 ************************************ 00:11:30.915 END TEST raid_state_function_test_sb 00:11:30.915 ************************************ 00:11:30.915 14:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.915 14:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.915 14:22:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:11:30.915 14:22:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:30.915 14:22:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.915 14:22:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:30.915 ************************************ 00:11:30.915 START TEST raid_superblock_test 00:11:30.915 ************************************ 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:30.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68729 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68729 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68729 ']' 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.915 14:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.174 [2024-11-20 14:22:09.916063] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:11:31.174 [2024-11-20 14:22:09.916223] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68729 ] 00:11:31.174 [2024-11-20 14:22:10.092128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.431 [2024-11-20 14:22:10.218450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.688 [2024-11-20 14:22:10.417146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.688 [2024-11-20 14:22:10.417367] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.946 14:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.946 14:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:31.946 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:31.946 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:31.946 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:31.946 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:31.946 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:31.946 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:31.946 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:31.946 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:31.946 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:31.946 
14:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.946 14:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.204 malloc1 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.204 [2024-11-20 14:22:10.937387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:32.204 [2024-11-20 14:22:10.937491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.204 [2024-11-20 14:22:10.937539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:32.204 [2024-11-20 14:22:10.937553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.204 [2024-11-20 14:22:10.940474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.204 [2024-11-20 14:22:10.940519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:32.204 pt1 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.204 malloc2 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.204 [2024-11-20 14:22:10.988759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:32.204 [2024-11-20 14:22:10.988850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.204 [2024-11-20 14:22:10.988888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:32.204 [2024-11-20 14:22:10.988902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.204 [2024-11-20 14:22:10.991681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.204 [2024-11-20 14:22:10.991863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:32.204 
pt2 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.204 14:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.204 malloc3 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.204 [2024-11-20 14:22:11.046517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:32.204 [2024-11-20 14:22:11.046600] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.204 [2024-11-20 14:22:11.046635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:32.204 [2024-11-20 14:22:11.046650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.204 [2024-11-20 14:22:11.049536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.204 [2024-11-20 14:22:11.049582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:32.204 pt3 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.204 [2024-11-20 14:22:11.054568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:32.204 [2024-11-20 14:22:11.057052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:32.204 [2024-11-20 14:22:11.057155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:32.204 [2024-11-20 14:22:11.057395] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:32.204 [2024-11-20 14:22:11.057425] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:32.204 [2024-11-20 14:22:11.057725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:32.204 
[2024-11-20 14:22:11.057977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:32.204 [2024-11-20 14:22:11.057998] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:32.204 [2024-11-20 14:22:11.058204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.204 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.204 "name": "raid_bdev1", 00:11:32.204 "uuid": "24df2142-4617-4f40-b64b-a81e8952781f", 00:11:32.204 "strip_size_kb": 0, 00:11:32.204 "state": "online", 00:11:32.204 "raid_level": "raid1", 00:11:32.204 "superblock": true, 00:11:32.204 "num_base_bdevs": 3, 00:11:32.205 "num_base_bdevs_discovered": 3, 00:11:32.205 "num_base_bdevs_operational": 3, 00:11:32.205 "base_bdevs_list": [ 00:11:32.205 { 00:11:32.205 "name": "pt1", 00:11:32.205 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:32.205 "is_configured": true, 00:11:32.205 "data_offset": 2048, 00:11:32.205 "data_size": 63488 00:11:32.205 }, 00:11:32.205 { 00:11:32.205 "name": "pt2", 00:11:32.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:32.205 "is_configured": true, 00:11:32.205 "data_offset": 2048, 00:11:32.205 "data_size": 63488 00:11:32.205 }, 00:11:32.205 { 00:11:32.205 "name": "pt3", 00:11:32.205 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:32.205 "is_configured": true, 00:11:32.205 "data_offset": 2048, 00:11:32.205 "data_size": 63488 00:11:32.205 } 00:11:32.205 ] 00:11:32.205 }' 00:11:32.205 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.205 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.770 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:32.770 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:32.770 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:32.770 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:32.770 14:22:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:32.770 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:32.770 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:32.770 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.770 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.770 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:32.770 [2024-11-20 14:22:11.567109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.770 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.770 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:32.770 "name": "raid_bdev1", 00:11:32.770 "aliases": [ 00:11:32.770 "24df2142-4617-4f40-b64b-a81e8952781f" 00:11:32.770 ], 00:11:32.770 "product_name": "Raid Volume", 00:11:32.770 "block_size": 512, 00:11:32.770 "num_blocks": 63488, 00:11:32.770 "uuid": "24df2142-4617-4f40-b64b-a81e8952781f", 00:11:32.770 "assigned_rate_limits": { 00:11:32.771 "rw_ios_per_sec": 0, 00:11:32.771 "rw_mbytes_per_sec": 0, 00:11:32.771 "r_mbytes_per_sec": 0, 00:11:32.771 "w_mbytes_per_sec": 0 00:11:32.771 }, 00:11:32.771 "claimed": false, 00:11:32.771 "zoned": false, 00:11:32.771 "supported_io_types": { 00:11:32.771 "read": true, 00:11:32.771 "write": true, 00:11:32.771 "unmap": false, 00:11:32.771 "flush": false, 00:11:32.771 "reset": true, 00:11:32.771 "nvme_admin": false, 00:11:32.771 "nvme_io": false, 00:11:32.771 "nvme_io_md": false, 00:11:32.771 "write_zeroes": true, 00:11:32.771 "zcopy": false, 00:11:32.771 "get_zone_info": false, 00:11:32.771 "zone_management": false, 00:11:32.771 "zone_append": false, 00:11:32.771 "compare": false, 00:11:32.771 
"compare_and_write": false, 00:11:32.771 "abort": false, 00:11:32.771 "seek_hole": false, 00:11:32.771 "seek_data": false, 00:11:32.771 "copy": false, 00:11:32.771 "nvme_iov_md": false 00:11:32.771 }, 00:11:32.771 "memory_domains": [ 00:11:32.771 { 00:11:32.771 "dma_device_id": "system", 00:11:32.771 "dma_device_type": 1 00:11:32.771 }, 00:11:32.771 { 00:11:32.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.771 "dma_device_type": 2 00:11:32.771 }, 00:11:32.771 { 00:11:32.771 "dma_device_id": "system", 00:11:32.771 "dma_device_type": 1 00:11:32.771 }, 00:11:32.771 { 00:11:32.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.771 "dma_device_type": 2 00:11:32.771 }, 00:11:32.771 { 00:11:32.771 "dma_device_id": "system", 00:11:32.771 "dma_device_type": 1 00:11:32.771 }, 00:11:32.771 { 00:11:32.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.771 "dma_device_type": 2 00:11:32.771 } 00:11:32.771 ], 00:11:32.771 "driver_specific": { 00:11:32.771 "raid": { 00:11:32.771 "uuid": "24df2142-4617-4f40-b64b-a81e8952781f", 00:11:32.771 "strip_size_kb": 0, 00:11:32.771 "state": "online", 00:11:32.771 "raid_level": "raid1", 00:11:32.771 "superblock": true, 00:11:32.771 "num_base_bdevs": 3, 00:11:32.771 "num_base_bdevs_discovered": 3, 00:11:32.771 "num_base_bdevs_operational": 3, 00:11:32.771 "base_bdevs_list": [ 00:11:32.771 { 00:11:32.771 "name": "pt1", 00:11:32.771 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:32.771 "is_configured": true, 00:11:32.771 "data_offset": 2048, 00:11:32.771 "data_size": 63488 00:11:32.771 }, 00:11:32.771 { 00:11:32.771 "name": "pt2", 00:11:32.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:32.771 "is_configured": true, 00:11:32.771 "data_offset": 2048, 00:11:32.771 "data_size": 63488 00:11:32.771 }, 00:11:32.771 { 00:11:32.771 "name": "pt3", 00:11:32.771 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:32.771 "is_configured": true, 00:11:32.771 "data_offset": 2048, 00:11:32.771 "data_size": 63488 00:11:32.771 } 
00:11:32.771 ] 00:11:32.771 } 00:11:32.771 } 00:11:32.771 }' 00:11:32.771 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:32.771 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:32.771 pt2 00:11:32.771 pt3' 00:11:32.771 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.771 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:32.771 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.771 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:32.771 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.771 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.771 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.771 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.029 [2024-11-20 14:22:11.891145] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=24df2142-4617-4f40-b64b-a81e8952781f 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 24df2142-4617-4f40-b64b-a81e8952781f ']' 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.029 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.029 [2024-11-20 14:22:11.942781] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:33.029 [2024-11-20 14:22:11.942812] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.029 [2024-11-20 14:22:11.942894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.030 [2024-11-20 14:22:11.942985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.030 [2024-11-20 14:22:11.943016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:33.030 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.030 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.030 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.030 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:33.030 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.030 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.030 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:33.030 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:33.030 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:33.030 14:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:33.030 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.030 14:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.030 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.030 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:33.030 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:33.030 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.030 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:33.289 14:22:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.289 [2024-11-20 14:22:12.082874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:33.289 [2024-11-20 14:22:12.085389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:33.289 [2024-11-20 14:22:12.085635] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:33.289 [2024-11-20 14:22:12.085719] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:33.289 [2024-11-20 14:22:12.085797] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:33.289 [2024-11-20 14:22:12.085833] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:33.289 [2024-11-20 14:22:12.085862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:33.289 [2024-11-20 14:22:12.085875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:33.289 request: 00:11:33.289 { 00:11:33.289 "name": "raid_bdev1", 00:11:33.289 "raid_level": "raid1", 00:11:33.289 "base_bdevs": [ 00:11:33.289 "malloc1", 00:11:33.289 "malloc2", 00:11:33.289 "malloc3" 00:11:33.289 ], 00:11:33.289 "superblock": false, 00:11:33.289 "method": "bdev_raid_create", 00:11:33.289 "req_id": 1 00:11:33.289 } 00:11:33.289 Got JSON-RPC error response 00:11:33.289 response: 00:11:33.289 { 00:11:33.289 "code": -17, 00:11:33.289 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:33.289 } 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:33.289 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.290 [2024-11-20 14:22:12.150836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:33.290 [2024-11-20 14:22:12.151064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.290 [2024-11-20 14:22:12.151159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:33.290 [2024-11-20 14:22:12.151292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.290 [2024-11-20 14:22:12.154200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.290 [2024-11-20 14:22:12.154366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:33.290 [2024-11-20 14:22:12.154581] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:33.290 [2024-11-20 14:22:12.154768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:33.290 pt1 00:11:33.290 
14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.290 "name": "raid_bdev1", 00:11:33.290 "uuid": "24df2142-4617-4f40-b64b-a81e8952781f", 00:11:33.290 "strip_size_kb": 0, 00:11:33.290 
"state": "configuring", 00:11:33.290 "raid_level": "raid1", 00:11:33.290 "superblock": true, 00:11:33.290 "num_base_bdevs": 3, 00:11:33.290 "num_base_bdevs_discovered": 1, 00:11:33.290 "num_base_bdevs_operational": 3, 00:11:33.290 "base_bdevs_list": [ 00:11:33.290 { 00:11:33.290 "name": "pt1", 00:11:33.290 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.290 "is_configured": true, 00:11:33.290 "data_offset": 2048, 00:11:33.290 "data_size": 63488 00:11:33.290 }, 00:11:33.290 { 00:11:33.290 "name": null, 00:11:33.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.290 "is_configured": false, 00:11:33.290 "data_offset": 2048, 00:11:33.290 "data_size": 63488 00:11:33.290 }, 00:11:33.290 { 00:11:33.290 "name": null, 00:11:33.290 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.290 "is_configured": false, 00:11:33.290 "data_offset": 2048, 00:11:33.290 "data_size": 63488 00:11:33.290 } 00:11:33.290 ] 00:11:33.290 }' 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.290 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.857 [2024-11-20 14:22:12.707362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:33.857 [2024-11-20 14:22:12.707443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.857 [2024-11-20 14:22:12.707492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:33.857 
[2024-11-20 14:22:12.707507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.857 [2024-11-20 14:22:12.708078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.857 [2024-11-20 14:22:12.708116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:33.857 [2024-11-20 14:22:12.708227] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:33.857 [2024-11-20 14:22:12.708261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:33.857 pt2 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.857 [2024-11-20 14:22:12.719339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.857 "name": "raid_bdev1", 00:11:33.857 "uuid": "24df2142-4617-4f40-b64b-a81e8952781f", 00:11:33.857 "strip_size_kb": 0, 00:11:33.857 "state": "configuring", 00:11:33.857 "raid_level": "raid1", 00:11:33.857 "superblock": true, 00:11:33.857 "num_base_bdevs": 3, 00:11:33.857 "num_base_bdevs_discovered": 1, 00:11:33.857 "num_base_bdevs_operational": 3, 00:11:33.857 "base_bdevs_list": [ 00:11:33.857 { 00:11:33.857 "name": "pt1", 00:11:33.857 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.857 "is_configured": true, 00:11:33.857 "data_offset": 2048, 00:11:33.857 "data_size": 63488 00:11:33.857 }, 00:11:33.857 { 00:11:33.857 "name": null, 00:11:33.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.857 "is_configured": false, 00:11:33.857 "data_offset": 0, 00:11:33.857 "data_size": 63488 00:11:33.857 }, 00:11:33.857 { 00:11:33.857 "name": null, 00:11:33.857 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.857 "is_configured": false, 00:11:33.857 
"data_offset": 2048, 00:11:33.857 "data_size": 63488 00:11:33.857 } 00:11:33.857 ] 00:11:33.857 }' 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.857 14:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.424 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:34.424 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:34.424 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.425 [2024-11-20 14:22:13.219483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:34.425 [2024-11-20 14:22:13.219577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.425 [2024-11-20 14:22:13.219608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:34.425 [2024-11-20 14:22:13.219626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.425 [2024-11-20 14:22:13.220229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.425 [2024-11-20 14:22:13.220259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:34.425 [2024-11-20 14:22:13.220361] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:34.425 [2024-11-20 14:22:13.220412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:34.425 pt2 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.425 14:22:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.425 [2024-11-20 14:22:13.231474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:34.425 [2024-11-20 14:22:13.231707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.425 [2024-11-20 14:22:13.231746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:34.425 [2024-11-20 14:22:13.231763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.425 [2024-11-20 14:22:13.232258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.425 [2024-11-20 14:22:13.232305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:34.425 [2024-11-20 14:22:13.232398] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:34.425 [2024-11-20 14:22:13.232434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:34.425 [2024-11-20 14:22:13.232593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:34.425 [2024-11-20 14:22:13.232618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:34.425 [2024-11-20 14:22:13.232937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:34.425 [2024-11-20 14:22:13.233175] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:11:34.425 [2024-11-20 14:22:13.233197] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:34.425 [2024-11-20 14:22:13.233391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.425 pt3 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.425 "name": "raid_bdev1", 00:11:34.425 "uuid": "24df2142-4617-4f40-b64b-a81e8952781f", 00:11:34.425 "strip_size_kb": 0, 00:11:34.425 "state": "online", 00:11:34.425 "raid_level": "raid1", 00:11:34.425 "superblock": true, 00:11:34.425 "num_base_bdevs": 3, 00:11:34.425 "num_base_bdevs_discovered": 3, 00:11:34.425 "num_base_bdevs_operational": 3, 00:11:34.425 "base_bdevs_list": [ 00:11:34.425 { 00:11:34.425 "name": "pt1", 00:11:34.425 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.425 "is_configured": true, 00:11:34.425 "data_offset": 2048, 00:11:34.425 "data_size": 63488 00:11:34.425 }, 00:11:34.425 { 00:11:34.425 "name": "pt2", 00:11:34.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.425 "is_configured": true, 00:11:34.425 "data_offset": 2048, 00:11:34.425 "data_size": 63488 00:11:34.425 }, 00:11:34.425 { 00:11:34.425 "name": "pt3", 00:11:34.425 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.425 "is_configured": true, 00:11:34.425 "data_offset": 2048, 00:11:34.425 "data_size": 63488 00:11:34.425 } 00:11:34.425 ] 00:11:34.425 }' 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.425 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.992 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:34.992 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:34.992 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:34.992 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:34.992 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:34.992 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:34.993 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:34.993 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.993 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.993 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:34.993 [2024-11-20 14:22:13.776022] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.993 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.993 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:34.993 "name": "raid_bdev1", 00:11:34.993 "aliases": [ 00:11:34.993 "24df2142-4617-4f40-b64b-a81e8952781f" 00:11:34.993 ], 00:11:34.993 "product_name": "Raid Volume", 00:11:34.993 "block_size": 512, 00:11:34.993 "num_blocks": 63488, 00:11:34.993 "uuid": "24df2142-4617-4f40-b64b-a81e8952781f", 00:11:34.993 "assigned_rate_limits": { 00:11:34.993 "rw_ios_per_sec": 0, 00:11:34.993 "rw_mbytes_per_sec": 0, 00:11:34.993 "r_mbytes_per_sec": 0, 00:11:34.993 "w_mbytes_per_sec": 0 00:11:34.993 }, 00:11:34.993 "claimed": false, 00:11:34.993 "zoned": false, 00:11:34.993 "supported_io_types": { 00:11:34.993 "read": true, 00:11:34.993 "write": true, 00:11:34.993 "unmap": false, 00:11:34.993 "flush": false, 00:11:34.993 "reset": true, 00:11:34.993 "nvme_admin": false, 00:11:34.993 "nvme_io": false, 00:11:34.993 "nvme_io_md": false, 00:11:34.993 "write_zeroes": true, 00:11:34.993 "zcopy": false, 00:11:34.993 "get_zone_info": false, 
00:11:34.993 "zone_management": false, 00:11:34.993 "zone_append": false, 00:11:34.993 "compare": false, 00:11:34.993 "compare_and_write": false, 00:11:34.993 "abort": false, 00:11:34.993 "seek_hole": false, 00:11:34.993 "seek_data": false, 00:11:34.993 "copy": false, 00:11:34.993 "nvme_iov_md": false 00:11:34.993 }, 00:11:34.993 "memory_domains": [ 00:11:34.993 { 00:11:34.993 "dma_device_id": "system", 00:11:34.993 "dma_device_type": 1 00:11:34.993 }, 00:11:34.993 { 00:11:34.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.993 "dma_device_type": 2 00:11:34.993 }, 00:11:34.993 { 00:11:34.993 "dma_device_id": "system", 00:11:34.993 "dma_device_type": 1 00:11:34.993 }, 00:11:34.993 { 00:11:34.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.993 "dma_device_type": 2 00:11:34.993 }, 00:11:34.993 { 00:11:34.993 "dma_device_id": "system", 00:11:34.993 "dma_device_type": 1 00:11:34.993 }, 00:11:34.993 { 00:11:34.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.993 "dma_device_type": 2 00:11:34.993 } 00:11:34.993 ], 00:11:34.993 "driver_specific": { 00:11:34.993 "raid": { 00:11:34.993 "uuid": "24df2142-4617-4f40-b64b-a81e8952781f", 00:11:34.993 "strip_size_kb": 0, 00:11:34.993 "state": "online", 00:11:34.993 "raid_level": "raid1", 00:11:34.993 "superblock": true, 00:11:34.993 "num_base_bdevs": 3, 00:11:34.993 "num_base_bdevs_discovered": 3, 00:11:34.993 "num_base_bdevs_operational": 3, 00:11:34.993 "base_bdevs_list": [ 00:11:34.993 { 00:11:34.993 "name": "pt1", 00:11:34.993 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.993 "is_configured": true, 00:11:34.993 "data_offset": 2048, 00:11:34.993 "data_size": 63488 00:11:34.993 }, 00:11:34.993 { 00:11:34.993 "name": "pt2", 00:11:34.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.993 "is_configured": true, 00:11:34.993 "data_offset": 2048, 00:11:34.993 "data_size": 63488 00:11:34.993 }, 00:11:34.993 { 00:11:34.993 "name": "pt3", 00:11:34.993 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:34.993 "is_configured": true, 00:11:34.993 "data_offset": 2048, 00:11:34.993 "data_size": 63488 00:11:34.993 } 00:11:34.993 ] 00:11:34.993 } 00:11:34.993 } 00:11:34.993 }' 00:11:34.993 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.993 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:34.993 pt2 00:11:34.993 pt3' 00:11:34.993 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.993 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:34.993 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.993 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:34.993 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.993 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.993 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.993 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.252 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.252 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.252 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.252 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:35.252 14:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.252 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.252 14:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.252 [2024-11-20 14:22:14.096103] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 24df2142-4617-4f40-b64b-a81e8952781f '!=' 24df2142-4617-4f40-b64b-a81e8952781f ']' 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.252 [2024-11-20 14:22:14.143791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.252 14:22:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.252 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.253 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.253 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.253 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.253 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.253 "name": "raid_bdev1", 00:11:35.253 "uuid": "24df2142-4617-4f40-b64b-a81e8952781f", 00:11:35.253 "strip_size_kb": 0, 00:11:35.253 "state": "online", 00:11:35.253 "raid_level": "raid1", 00:11:35.253 "superblock": true, 00:11:35.253 "num_base_bdevs": 3, 00:11:35.253 "num_base_bdevs_discovered": 2, 00:11:35.253 "num_base_bdevs_operational": 2, 00:11:35.253 "base_bdevs_list": [ 00:11:35.253 { 00:11:35.253 "name": null, 00:11:35.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.253 "is_configured": false, 00:11:35.253 "data_offset": 0, 00:11:35.253 "data_size": 63488 00:11:35.253 }, 00:11:35.253 { 00:11:35.253 "name": "pt2", 00:11:35.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.253 "is_configured": true, 00:11:35.253 "data_offset": 2048, 00:11:35.253 "data_size": 63488 00:11:35.253 }, 00:11:35.253 { 00:11:35.253 "name": "pt3", 00:11:35.253 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.253 "is_configured": true, 00:11:35.253 "data_offset": 2048, 00:11:35.253 "data_size": 63488 00:11:35.253 } 
00:11:35.253 ] 00:11:35.253 }' 00:11:35.253 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.253 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.821 [2024-11-20 14:22:14.655904] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:35.821 [2024-11-20 14:22:14.656083] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.821 [2024-11-20 14:22:14.656215] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.821 [2024-11-20 14:22:14.656296] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.821 [2024-11-20 14:22:14.656320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:35.821 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.822 14:22:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.822 [2024-11-20 14:22:14.735864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:35.822 [2024-11-20 14:22:14.735945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.822 [2024-11-20 14:22:14.735969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:35.822 [2024-11-20 14:22:14.736018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.822 [2024-11-20 14:22:14.738967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.822 [2024-11-20 14:22:14.739062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:35.822 [2024-11-20 14:22:14.739186] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:35.822 [2024-11-20 14:22:14.739253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:35.822 pt2 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.822 14:22:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.822 "name": "raid_bdev1", 00:11:35.822 "uuid": "24df2142-4617-4f40-b64b-a81e8952781f", 00:11:35.822 "strip_size_kb": 0, 00:11:35.822 "state": "configuring", 00:11:35.822 "raid_level": "raid1", 00:11:35.822 "superblock": true, 00:11:35.822 "num_base_bdevs": 3, 00:11:35.822 "num_base_bdevs_discovered": 1, 00:11:35.822 "num_base_bdevs_operational": 2, 00:11:35.822 "base_bdevs_list": [ 00:11:35.822 { 00:11:35.822 "name": null, 00:11:35.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.822 "is_configured": false, 00:11:35.822 "data_offset": 2048, 00:11:35.822 "data_size": 63488 00:11:35.822 }, 00:11:35.822 { 00:11:35.822 "name": "pt2", 00:11:35.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.822 "is_configured": true, 00:11:35.822 "data_offset": 2048, 00:11:35.822 "data_size": 63488 00:11:35.822 }, 00:11:35.822 { 00:11:35.822 "name": null, 00:11:35.822 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.822 "is_configured": false, 00:11:35.822 "data_offset": 2048, 00:11:35.822 "data_size": 63488 00:11:35.822 } 
00:11:35.822 ] 00:11:35.822 }' 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.822 14:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.388 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:36.388 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:36.388 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:11:36.388 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:36.388 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.388 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.388 [2024-11-20 14:22:15.252080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:36.388 [2024-11-20 14:22:15.252162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.388 [2024-11-20 14:22:15.252193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:36.389 [2024-11-20 14:22:15.252221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.389 [2024-11-20 14:22:15.252831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.389 [2024-11-20 14:22:15.252880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:36.389 [2024-11-20 14:22:15.253009] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:36.389 [2024-11-20 14:22:15.253054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:36.389 [2024-11-20 14:22:15.253212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:11:36.389 [2024-11-20 14:22:15.253234] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:36.389 [2024-11-20 14:22:15.253571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:36.389 [2024-11-20 14:22:15.253776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:36.389 [2024-11-20 14:22:15.253793] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:36.389 [2024-11-20 14:22:15.253964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.389 pt3 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.389 
14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.389 "name": "raid_bdev1", 00:11:36.389 "uuid": "24df2142-4617-4f40-b64b-a81e8952781f", 00:11:36.389 "strip_size_kb": 0, 00:11:36.389 "state": "online", 00:11:36.389 "raid_level": "raid1", 00:11:36.389 "superblock": true, 00:11:36.389 "num_base_bdevs": 3, 00:11:36.389 "num_base_bdevs_discovered": 2, 00:11:36.389 "num_base_bdevs_operational": 2, 00:11:36.389 "base_bdevs_list": [ 00:11:36.389 { 00:11:36.389 "name": null, 00:11:36.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.389 "is_configured": false, 00:11:36.389 "data_offset": 2048, 00:11:36.389 "data_size": 63488 00:11:36.389 }, 00:11:36.389 { 00:11:36.389 "name": "pt2", 00:11:36.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.389 "is_configured": true, 00:11:36.389 "data_offset": 2048, 00:11:36.389 "data_size": 63488 00:11:36.389 }, 00:11:36.389 { 00:11:36.389 "name": "pt3", 00:11:36.389 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.389 "is_configured": true, 00:11:36.389 "data_offset": 2048, 00:11:36.389 "data_size": 63488 00:11:36.389 } 00:11:36.389 ] 00:11:36.389 }' 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.389 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.958 [2024-11-20 14:22:15.788209] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.958 [2024-11-20 14:22:15.788385] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.958 [2024-11-20 14:22:15.788514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.958 [2024-11-20 14:22:15.788598] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.958 [2024-11-20 14:22:15.788614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.958 [2024-11-20 14:22:15.860237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:36.958 [2024-11-20 14:22:15.860312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.958 [2024-11-20 14:22:15.860341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:36.958 [2024-11-20 14:22:15.860355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.958 [2024-11-20 14:22:15.863373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.958 [2024-11-20 14:22:15.863612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:36.958 [2024-11-20 14:22:15.863729] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:36.958 [2024-11-20 14:22:15.863793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:36.958 [2024-11-20 14:22:15.863968] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:36.958 [2024-11-20 14:22:15.864010] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.958 [2024-11-20 14:22:15.864036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:11:36.958 [2024-11-20 14:22:15.864108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:36.958 pt1 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.958 "name": "raid_bdev1", 00:11:36.958 "uuid": "24df2142-4617-4f40-b64b-a81e8952781f", 00:11:36.958 "strip_size_kb": 0, 00:11:36.958 "state": "configuring", 00:11:36.958 "raid_level": "raid1", 00:11:36.958 "superblock": true, 00:11:36.958 "num_base_bdevs": 3, 00:11:36.958 "num_base_bdevs_discovered": 1, 00:11:36.958 "num_base_bdevs_operational": 2, 00:11:36.958 "base_bdevs_list": [ 00:11:36.958 { 00:11:36.958 "name": null, 00:11:36.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.958 "is_configured": false, 00:11:36.958 "data_offset": 2048, 00:11:36.958 "data_size": 63488 00:11:36.958 }, 00:11:36.958 { 00:11:36.958 "name": "pt2", 00:11:36.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.958 "is_configured": true, 00:11:36.958 "data_offset": 2048, 00:11:36.958 "data_size": 63488 00:11:36.958 }, 00:11:36.958 { 00:11:36.958 "name": null, 00:11:36.958 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.958 "is_configured": false, 00:11:36.958 "data_offset": 2048, 00:11:36.958 "data_size": 63488 00:11:36.958 } 00:11:36.958 ] 00:11:36.958 }' 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.958 14:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.526 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:37.526 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:37.526 14:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.526 14:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.526 14:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:37.526 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:37.526 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:37.526 14:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.526 14:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.526 [2024-11-20 14:22:16.464486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:37.526 [2024-11-20 14:22:16.464583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.526 [2024-11-20 14:22:16.464617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:37.526 [2024-11-20 14:22:16.464636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.526 [2024-11-20 14:22:16.465263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.526 [2024-11-20 14:22:16.465301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:37.526 [2024-11-20 14:22:16.465414] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:37.526 [2024-11-20 14:22:16.465453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:37.526 [2024-11-20 14:22:16.465610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:37.526 [2024-11-20 14:22:16.465633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:37.526 [2024-11-20 14:22:16.465957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:37.526 [2024-11-20 14:22:16.466178] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:37.526 [2024-11-20 14:22:16.466205] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:37.526 [2024-11-20 14:22:16.466373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.526 pt3 00:11:37.526 14:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.526 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:37.526 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.526 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.526 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.526 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.526 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:37.526 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.527 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.527 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.527 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.527 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.527 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.527 14:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.527 14:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.527 14:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:37.786 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.786 "name": "raid_bdev1", 00:11:37.786 "uuid": "24df2142-4617-4f40-b64b-a81e8952781f", 00:11:37.786 "strip_size_kb": 0, 00:11:37.786 "state": "online", 00:11:37.786 "raid_level": "raid1", 00:11:37.786 "superblock": true, 00:11:37.786 "num_base_bdevs": 3, 00:11:37.786 "num_base_bdevs_discovered": 2, 00:11:37.786 "num_base_bdevs_operational": 2, 00:11:37.786 "base_bdevs_list": [ 00:11:37.786 { 00:11:37.786 "name": null, 00:11:37.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.786 "is_configured": false, 00:11:37.786 "data_offset": 2048, 00:11:37.786 "data_size": 63488 00:11:37.786 }, 00:11:37.786 { 00:11:37.786 "name": "pt2", 00:11:37.786 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.786 "is_configured": true, 00:11:37.786 "data_offset": 2048, 00:11:37.786 "data_size": 63488 00:11:37.786 }, 00:11:37.786 { 00:11:37.786 "name": "pt3", 00:11:37.786 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.786 "is_configured": true, 00:11:37.786 "data_offset": 2048, 00:11:37.786 "data_size": 63488 00:11:37.786 } 00:11:37.786 ] 00:11:37.786 }' 00:11:37.786 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.786 14:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.069 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:38.069 14:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:38.069 14:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.069 14:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.069 14:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.069 14:22:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:38.069 14:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:38.069 14:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:38.069 14:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.069 14:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.069 [2024-11-20 14:22:17.041061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.328 14:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.328 14:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 24df2142-4617-4f40-b64b-a81e8952781f '!=' 24df2142-4617-4f40-b64b-a81e8952781f ']' 00:11:38.328 14:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68729 00:11:38.328 14:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68729 ']' 00:11:38.328 14:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68729 00:11:38.328 14:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:38.328 14:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.328 14:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68729 00:11:38.328 killing process with pid 68729 00:11:38.328 14:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.328 14:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.328 14:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68729' 00:11:38.328 14:22:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68729 00:11:38.328 [2024-11-20 14:22:17.117812] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.328 14:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68729 00:11:38.328 [2024-11-20 14:22:17.117918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.328 [2024-11-20 14:22:17.117995] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.328 [2024-11-20 14:22:17.118049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:38.587 [2024-11-20 14:22:17.376667] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:39.522 ************************************ 00:11:39.522 END TEST raid_superblock_test 00:11:39.522 ************************************ 00:11:39.522 14:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:39.523 00:11:39.523 real 0m8.609s 00:11:39.523 user 0m14.143s 00:11:39.523 sys 0m1.156s 00:11:39.523 14:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.523 14:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.523 14:22:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:11:39.523 14:22:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:39.523 14:22:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.523 14:22:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:39.523 ************************************ 00:11:39.523 START TEST raid_read_error_test 00:11:39.523 ************************************ 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:11:39.523 14:22:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:39.523 14:22:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.owyuRK41PM 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69180 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69180 00:11:39.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69180 ']' 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.523 14:22:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.782 [2024-11-20 14:22:18.597230] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:11:39.782 [2024-11-20 14:22:18.597591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69180 ] 00:11:40.040 [2024-11-20 14:22:18.771431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.040 [2024-11-20 14:22:18.898154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.299 [2024-11-20 14:22:19.103083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.299 [2024-11-20 14:22:19.103204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.867 BaseBdev1_malloc 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.867 true 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.867 [2024-11-20 14:22:19.612494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:40.867 [2024-11-20 14:22:19.612579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.867 [2024-11-20 14:22:19.612610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:40.867 [2024-11-20 14:22:19.612628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.867 [2024-11-20 14:22:19.615543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.867 [2024-11-20 14:22:19.615613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:40.867 BaseBdev1 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.867 BaseBdev2_malloc 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.867 true 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.867 [2024-11-20 14:22:19.672936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:40.867 [2024-11-20 14:22:19.673040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.867 [2024-11-20 14:22:19.673069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:40.867 [2024-11-20 14:22:19.673087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.867 [2024-11-20 14:22:19.675950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.867 [2024-11-20 14:22:19.676030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:40.867 BaseBdev2 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.867 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.867 BaseBdev3_malloc 00:11:40.868 14:22:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.868 true 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.868 [2024-11-20 14:22:19.742651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:40.868 [2024-11-20 14:22:19.742723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.868 [2024-11-20 14:22:19.742751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:40.868 [2024-11-20 14:22:19.742769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.868 [2024-11-20 14:22:19.745640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.868 [2024-11-20 14:22:19.745693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:40.868 BaseBdev3 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.868 [2024-11-20 14:22:19.754743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.868 [2024-11-20 14:22:19.757380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.868 [2024-11-20 14:22:19.757489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.868 [2024-11-20 14:22:19.757778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:40.868 [2024-11-20 14:22:19.757799] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:40.868 [2024-11-20 14:22:19.758281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:40.868 [2024-11-20 14:22:19.758677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:40.868 [2024-11-20 14:22:19.758814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:40.868 [2024-11-20 14:22:19.759227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.868 14:22:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.868 "name": "raid_bdev1", 00:11:40.868 "uuid": "030f529a-b2dc-4b9b-b294-133edcb123d1", 00:11:40.868 "strip_size_kb": 0, 00:11:40.868 "state": "online", 00:11:40.868 "raid_level": "raid1", 00:11:40.868 "superblock": true, 00:11:40.868 "num_base_bdevs": 3, 00:11:40.868 "num_base_bdevs_discovered": 3, 00:11:40.868 "num_base_bdevs_operational": 3, 00:11:40.868 "base_bdevs_list": [ 00:11:40.868 { 00:11:40.868 "name": "BaseBdev1", 00:11:40.868 "uuid": "7b9898fd-88c7-5f94-8545-6dbde5c184b9", 00:11:40.868 "is_configured": true, 00:11:40.868 "data_offset": 2048, 00:11:40.868 "data_size": 63488 00:11:40.868 }, 00:11:40.868 { 00:11:40.868 "name": "BaseBdev2", 00:11:40.868 "uuid": "4aed3987-e4b3-5638-8b43-d9ff8b02a920", 00:11:40.868 "is_configured": true, 00:11:40.868 "data_offset": 2048, 00:11:40.868 "data_size": 63488 
00:11:40.868 }, 00:11:40.868 { 00:11:40.868 "name": "BaseBdev3", 00:11:40.868 "uuid": "cda453d7-53cb-57dd-a5d8-7fadc2b4f491", 00:11:40.868 "is_configured": true, 00:11:40.868 "data_offset": 2048, 00:11:40.868 "data_size": 63488 00:11:40.868 } 00:11:40.868 ] 00:11:40.868 }' 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.868 14:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.436 14:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:41.436 14:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:41.436 [2024-11-20 14:22:20.380758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.458 
14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.458 "name": "raid_bdev1", 00:11:42.458 "uuid": "030f529a-b2dc-4b9b-b294-133edcb123d1", 00:11:42.458 "strip_size_kb": 0, 00:11:42.458 "state": "online", 00:11:42.458 "raid_level": "raid1", 00:11:42.458 "superblock": true, 00:11:42.458 "num_base_bdevs": 3, 00:11:42.458 "num_base_bdevs_discovered": 3, 00:11:42.458 "num_base_bdevs_operational": 3, 00:11:42.458 "base_bdevs_list": [ 00:11:42.458 { 00:11:42.458 "name": "BaseBdev1", 00:11:42.458 "uuid": "7b9898fd-88c7-5f94-8545-6dbde5c184b9", 
00:11:42.458 "is_configured": true, 00:11:42.458 "data_offset": 2048, 00:11:42.458 "data_size": 63488 00:11:42.458 }, 00:11:42.458 { 00:11:42.458 "name": "BaseBdev2", 00:11:42.458 "uuid": "4aed3987-e4b3-5638-8b43-d9ff8b02a920", 00:11:42.458 "is_configured": true, 00:11:42.458 "data_offset": 2048, 00:11:42.458 "data_size": 63488 00:11:42.458 }, 00:11:42.458 { 00:11:42.458 "name": "BaseBdev3", 00:11:42.458 "uuid": "cda453d7-53cb-57dd-a5d8-7fadc2b4f491", 00:11:42.458 "is_configured": true, 00:11:42.458 "data_offset": 2048, 00:11:42.458 "data_size": 63488 00:11:42.458 } 00:11:42.458 ] 00:11:42.458 }' 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.458 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.026 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:43.026 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.026 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.026 [2024-11-20 14:22:21.842548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.026 [2024-11-20 14:22:21.842736] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.026 [2024-11-20 14:22:21.846344] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.026 [2024-11-20 14:22:21.846565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.026 { 00:11:43.026 "results": [ 00:11:43.026 { 00:11:43.026 "job": "raid_bdev1", 00:11:43.026 "core_mask": "0x1", 00:11:43.026 "workload": "randrw", 00:11:43.026 "percentage": 50, 00:11:43.026 "status": "finished", 00:11:43.026 "queue_depth": 1, 00:11:43.026 "io_size": 131072, 00:11:43.026 "runtime": 1.459707, 00:11:43.026 "iops": 9160.05746358687, 00:11:43.026 "mibps": 1145.0071829483588, 
00:11:43.026 "io_failed": 0, 00:11:43.026 "io_timeout": 0, 00:11:43.026 "avg_latency_us": 104.6753873035946, 00:11:43.026 "min_latency_us": 40.96, 00:11:43.026 "max_latency_us": 1794.7927272727272 00:11:43.026 } 00:11:43.026 ], 00:11:43.026 "core_count": 1 00:11:43.026 } 00:11:43.026 [2024-11-20 14:22:21.846817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.026 [2024-11-20 14:22:21.846843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:43.026 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.026 14:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69180 00:11:43.026 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69180 ']' 00:11:43.026 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69180 00:11:43.026 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:43.026 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.026 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69180 00:11:43.026 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.026 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.026 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69180' 00:11:43.026 killing process with pid 69180 00:11:43.027 14:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69180 00:11:43.027 [2024-11-20 14:22:21.889317] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.027 14:22:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69180 00:11:43.285 [2024-11-20 14:22:22.092537] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.663 14:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.owyuRK41PM 00:11:44.663 14:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:44.663 14:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:44.663 14:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:44.663 14:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:44.663 ************************************ 00:11:44.663 END TEST raid_read_error_test 00:11:44.663 ************************************ 00:11:44.663 14:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:44.663 14:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:44.663 14:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:44.663 00:11:44.663 real 0m4.723s 00:11:44.663 user 0m5.824s 00:11:44.663 sys 0m0.591s 00:11:44.663 14:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.663 14:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.663 14:22:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:11:44.663 14:22:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:44.663 14:22:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.663 14:22:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.663 ************************************ 00:11:44.663 START TEST raid_write_error_test 00:11:44.663 ************************************ 00:11:44.663 14:22:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:11:44.663 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:44.663 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:44.663 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:44.663 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:44.663 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.663 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:44.663 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.663 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.663 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:44.663 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.663 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:44.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.o9rWCnWDxj 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69326 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69326 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69326 ']' 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.664 14:22:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.664 [2024-11-20 14:22:23.385892] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:11:44.664 [2024-11-20 14:22:23.386317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69326 ] 00:11:44.664 [2024-11-20 14:22:23.565128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.965 [2024-11-20 14:22:23.696851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.965 [2024-11-20 14:22:23.901538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.965 [2024-11-20 14:22:23.901825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.535 BaseBdev1_malloc 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.535 true 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.535 [2024-11-20 14:22:24.400791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:45.535 [2024-11-20 14:22:24.401033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.535 [2024-11-20 14:22:24.401076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:45.535 [2024-11-20 14:22:24.401096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.535 [2024-11-20 14:22:24.403887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.535 [2024-11-20 14:22:24.403943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:45.535 BaseBdev1 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:45.535 BaseBdev2_malloc 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.535 true 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.535 [2024-11-20 14:22:24.456963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:45.535 [2024-11-20 14:22:24.457051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.535 [2024-11-20 14:22:24.457078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:45.535 [2024-11-20 14:22:24.457097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.535 [2024-11-20 14:22:24.459854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.535 [2024-11-20 14:22:24.460073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:45.535 BaseBdev2 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.535 14:22:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.535 BaseBdev3_malloc 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.535 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.793 true 00:11:45.793 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.793 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:45.793 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.793 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.793 [2024-11-20 14:22:24.525424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:45.793 [2024-11-20 14:22:24.525496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.793 [2024-11-20 14:22:24.525526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:45.793 [2024-11-20 14:22:24.525545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.793 [2024-11-20 14:22:24.528367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.794 [2024-11-20 14:22:24.528421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:45.794 BaseBdev3 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.794 [2024-11-20 14:22:24.533503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.794 [2024-11-20 14:22:24.535931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.794 [2024-11-20 14:22:24.536225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:45.794 [2024-11-20 14:22:24.536530] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:45.794 [2024-11-20 14:22:24.536551] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:45.794 [2024-11-20 14:22:24.536861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:45.794 [2024-11-20 14:22:24.537108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:45.794 [2024-11-20 14:22:24.537130] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:45.794 [2024-11-20 14:22:24.537318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.794 "name": "raid_bdev1", 00:11:45.794 "uuid": "5f7d67cf-7858-4bca-8941-8ba5b27fe1a6", 00:11:45.794 "strip_size_kb": 0, 00:11:45.794 "state": "online", 00:11:45.794 "raid_level": "raid1", 00:11:45.794 "superblock": true, 00:11:45.794 "num_base_bdevs": 3, 00:11:45.794 "num_base_bdevs_discovered": 3, 00:11:45.794 "num_base_bdevs_operational": 3, 00:11:45.794 "base_bdevs_list": [ 00:11:45.794 { 00:11:45.794 "name": "BaseBdev1", 00:11:45.794 
"uuid": "dfb469f4-435c-5de2-b570-9da8f7844177", 00:11:45.794 "is_configured": true, 00:11:45.794 "data_offset": 2048, 00:11:45.794 "data_size": 63488 00:11:45.794 }, 00:11:45.794 { 00:11:45.794 "name": "BaseBdev2", 00:11:45.794 "uuid": "08ecf9be-986f-5363-95aa-57f2c7d7094b", 00:11:45.794 "is_configured": true, 00:11:45.794 "data_offset": 2048, 00:11:45.794 "data_size": 63488 00:11:45.794 }, 00:11:45.794 { 00:11:45.794 "name": "BaseBdev3", 00:11:45.794 "uuid": "6aee5f8d-5c11-5618-aa11-bf013ff486c2", 00:11:45.794 "is_configured": true, 00:11:45.794 "data_offset": 2048, 00:11:45.794 "data_size": 63488 00:11:45.794 } 00:11:45.794 ] 00:11:45.794 }' 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.794 14:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.055 14:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:46.055 14:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:46.326 [2024-11-20 14:22:25.119103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.260 [2024-11-20 14:22:26.024348] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:47.260 [2024-11-20 14:22:26.024425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:47.260 [2024-11-20 14:22:26.024700] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.260 "name": "raid_bdev1", 00:11:47.260 "uuid": "5f7d67cf-7858-4bca-8941-8ba5b27fe1a6", 00:11:47.260 "strip_size_kb": 0, 00:11:47.260 "state": "online", 00:11:47.260 "raid_level": "raid1", 00:11:47.260 "superblock": true, 00:11:47.260 "num_base_bdevs": 3, 00:11:47.260 "num_base_bdevs_discovered": 2, 00:11:47.260 "num_base_bdevs_operational": 2, 00:11:47.260 "base_bdevs_list": [ 00:11:47.260 { 00:11:47.260 "name": null, 00:11:47.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.260 "is_configured": false, 00:11:47.260 "data_offset": 0, 00:11:47.260 "data_size": 63488 00:11:47.260 }, 00:11:47.260 { 00:11:47.260 "name": "BaseBdev2", 00:11:47.260 "uuid": "08ecf9be-986f-5363-95aa-57f2c7d7094b", 00:11:47.260 "is_configured": true, 00:11:47.260 "data_offset": 2048, 00:11:47.260 "data_size": 63488 00:11:47.260 }, 00:11:47.260 { 00:11:47.260 "name": "BaseBdev3", 00:11:47.260 "uuid": "6aee5f8d-5c11-5618-aa11-bf013ff486c2", 00:11:47.260 "is_configured": true, 00:11:47.260 "data_offset": 2048, 00:11:47.260 "data_size": 63488 00:11:47.260 } 00:11:47.260 ] 00:11:47.260 }' 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.260 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.827 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:47.827 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.827 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.827 [2024-11-20 14:22:26.553668] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.827 [2024-11-20 14:22:26.553707] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.827 [2024-11-20 14:22:26.557147] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.827 [2024-11-20 14:22:26.557224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.827 [2024-11-20 14:22:26.557326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.827 [2024-11-20 14:22:26.557346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:47.827 { 00:11:47.827 "results": [ 00:11:47.827 { 00:11:47.827 "job": "raid_bdev1", 00:11:47.827 "core_mask": "0x1", 00:11:47.827 "workload": "randrw", 00:11:47.827 "percentage": 50, 00:11:47.827 "status": "finished", 00:11:47.827 "queue_depth": 1, 00:11:47.827 "io_size": 131072, 00:11:47.827 "runtime": 1.431973, 00:11:47.827 "iops": 10516.958071136816, 00:11:47.827 "mibps": 1314.619758892102, 00:11:47.827 "io_failed": 0, 00:11:47.827 "io_timeout": 0, 00:11:47.827 "avg_latency_us": 90.67921477725463, 00:11:47.827 "min_latency_us": 41.192727272727275, 00:11:47.827 "max_latency_us": 1846.9236363636364 00:11:47.827 } 00:11:47.827 ], 00:11:47.827 "core_count": 1 00:11:47.827 } 00:11:47.827 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.827 14:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69326 00:11:47.827 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69326 ']' 00:11:47.827 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69326 00:11:47.827 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:47.827 14:22:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.827 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69326 00:11:47.827 killing process with pid 69326 00:11:47.827 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.827 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.827 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69326' 00:11:47.827 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69326 00:11:47.827 [2024-11-20 14:22:26.595321] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.827 14:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69326 00:11:47.827 [2024-11-20 14:22:26.796339] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:49.199 14:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:49.199 14:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.o9rWCnWDxj 00:11:49.199 14:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:49.199 14:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:49.199 14:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:49.199 14:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:49.199 14:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:49.199 14:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:49.199 00:11:49.199 real 0m4.637s 00:11:49.199 user 0m5.684s 00:11:49.199 sys 0m0.597s 00:11:49.199 14:22:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.199 ************************************ 00:11:49.199 END TEST raid_write_error_test 00:11:49.199 ************************************ 00:11:49.199 14:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.199 14:22:27 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:49.199 14:22:27 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:49.199 14:22:27 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:49.199 14:22:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:49.199 14:22:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.199 14:22:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:49.199 ************************************ 00:11:49.199 START TEST raid_state_function_test 00:11:49.199 ************************************ 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:49.199 
14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:49.199 Process raid pid: 69470 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69470 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69470' 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69470 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69470 ']' 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.199 14:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.199 [2024-11-20 14:22:28.076509] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:11:49.200 [2024-11-20 14:22:28.076865] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.458 [2024-11-20 14:22:28.264872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.458 [2024-11-20 14:22:28.395705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.716 [2024-11-20 14:22:28.604772] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.716 [2024-11-20 14:22:28.604826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.297 [2024-11-20 14:22:29.060108] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:50.297 [2024-11-20 14:22:29.060185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:50.297 [2024-11-20 14:22:29.060204] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:50.297 [2024-11-20 14:22:29.060221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:50.297 [2024-11-20 14:22:29.060231] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:50.297 [2024-11-20 14:22:29.060246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:50.297 [2024-11-20 14:22:29.060255] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:50.297 [2024-11-20 14:22:29.060269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.297 "name": "Existed_Raid", 00:11:50.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.297 "strip_size_kb": 64, 00:11:50.297 "state": "configuring", 00:11:50.297 "raid_level": "raid0", 00:11:50.297 "superblock": false, 00:11:50.297 "num_base_bdevs": 4, 00:11:50.297 "num_base_bdevs_discovered": 0, 00:11:50.297 "num_base_bdevs_operational": 4, 00:11:50.297 "base_bdevs_list": [ 00:11:50.297 { 00:11:50.297 "name": "BaseBdev1", 00:11:50.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.297 "is_configured": false, 00:11:50.297 "data_offset": 0, 00:11:50.297 "data_size": 0 00:11:50.297 }, 00:11:50.297 { 00:11:50.297 "name": "BaseBdev2", 00:11:50.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.297 "is_configured": false, 00:11:50.297 "data_offset": 0, 00:11:50.297 "data_size": 0 00:11:50.297 }, 00:11:50.297 { 00:11:50.297 "name": "BaseBdev3", 00:11:50.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.297 "is_configured": false, 00:11:50.297 "data_offset": 0, 00:11:50.297 "data_size": 0 00:11:50.297 }, 00:11:50.297 { 00:11:50.297 "name": "BaseBdev4", 00:11:50.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.297 "is_configured": false, 00:11:50.297 "data_offset": 0, 00:11:50.297 "data_size": 0 00:11:50.297 } 00:11:50.297 ] 00:11:50.297 }' 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.297 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.890 [2024-11-20 14:22:29.576172] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:50.890 [2024-11-20 14:22:29.576218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.890 [2024-11-20 14:22:29.584169] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:50.890 [2024-11-20 14:22:29.584224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:50.890 [2024-11-20 14:22:29.584239] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:50.890 [2024-11-20 14:22:29.584256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:50.890 [2024-11-20 14:22:29.584265] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:50.890 [2024-11-20 14:22:29.584280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:50.890 [2024-11-20 14:22:29.584290] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:50.890 [2024-11-20 14:22:29.584304] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.890 [2024-11-20 14:22:29.629390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.890 BaseBdev1 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.890 [ 00:11:50.890 { 00:11:50.890 "name": "BaseBdev1", 00:11:50.890 "aliases": [ 00:11:50.890 "18356257-6ac3-47e8-969e-067ffad47b59" 00:11:50.890 ], 00:11:50.890 "product_name": "Malloc disk", 00:11:50.890 "block_size": 512, 00:11:50.890 "num_blocks": 65536, 00:11:50.890 "uuid": "18356257-6ac3-47e8-969e-067ffad47b59", 00:11:50.890 "assigned_rate_limits": { 00:11:50.890 "rw_ios_per_sec": 0, 00:11:50.890 "rw_mbytes_per_sec": 0, 00:11:50.890 "r_mbytes_per_sec": 0, 00:11:50.890 "w_mbytes_per_sec": 0 00:11:50.890 }, 00:11:50.890 "claimed": true, 00:11:50.890 "claim_type": "exclusive_write", 00:11:50.890 "zoned": false, 00:11:50.890 "supported_io_types": { 00:11:50.890 "read": true, 00:11:50.890 "write": true, 00:11:50.890 "unmap": true, 00:11:50.890 "flush": true, 00:11:50.890 "reset": true, 00:11:50.890 "nvme_admin": false, 00:11:50.890 "nvme_io": false, 00:11:50.890 "nvme_io_md": false, 00:11:50.890 "write_zeroes": true, 00:11:50.890 "zcopy": true, 00:11:50.890 "get_zone_info": false, 00:11:50.890 "zone_management": false, 00:11:50.890 "zone_append": false, 00:11:50.890 "compare": false, 00:11:50.890 "compare_and_write": false, 00:11:50.890 "abort": true, 00:11:50.890 "seek_hole": false, 00:11:50.890 "seek_data": false, 00:11:50.890 "copy": true, 00:11:50.890 "nvme_iov_md": false 00:11:50.890 }, 00:11:50.890 "memory_domains": [ 00:11:50.890 { 00:11:50.890 "dma_device_id": "system", 00:11:50.890 "dma_device_type": 1 00:11:50.890 }, 00:11:50.890 { 00:11:50.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.890 "dma_device_type": 2 00:11:50.890 } 00:11:50.890 ], 00:11:50.890 "driver_specific": {} 00:11:50.890 } 00:11:50.890 ] 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.890 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.891 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.891 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.891 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.891 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.891 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.891 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.891 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.891 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.891 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.891 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.891 "name": "Existed_Raid", 
00:11:50.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.891 "strip_size_kb": 64, 00:11:50.891 "state": "configuring", 00:11:50.891 "raid_level": "raid0", 00:11:50.891 "superblock": false, 00:11:50.891 "num_base_bdevs": 4, 00:11:50.891 "num_base_bdevs_discovered": 1, 00:11:50.891 "num_base_bdevs_operational": 4, 00:11:50.891 "base_bdevs_list": [ 00:11:50.891 { 00:11:50.891 "name": "BaseBdev1", 00:11:50.891 "uuid": "18356257-6ac3-47e8-969e-067ffad47b59", 00:11:50.891 "is_configured": true, 00:11:50.891 "data_offset": 0, 00:11:50.891 "data_size": 65536 00:11:50.891 }, 00:11:50.891 { 00:11:50.891 "name": "BaseBdev2", 00:11:50.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.891 "is_configured": false, 00:11:50.891 "data_offset": 0, 00:11:50.891 "data_size": 0 00:11:50.891 }, 00:11:50.891 { 00:11:50.891 "name": "BaseBdev3", 00:11:50.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.891 "is_configured": false, 00:11:50.891 "data_offset": 0, 00:11:50.891 "data_size": 0 00:11:50.891 }, 00:11:50.891 { 00:11:50.891 "name": "BaseBdev4", 00:11:50.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.891 "is_configured": false, 00:11:50.891 "data_offset": 0, 00:11:50.891 "data_size": 0 00:11:50.891 } 00:11:50.891 ] 00:11:50.891 }' 00:11:50.891 14:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.891 14:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.457 [2024-11-20 14:22:30.169581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:51.457 [2024-11-20 14:22:30.169642] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.457 [2024-11-20 14:22:30.177627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.457 [2024-11-20 14:22:30.180204] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.457 [2024-11-20 14:22:30.180261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.457 [2024-11-20 14:22:30.180278] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:51.457 [2024-11-20 14:22:30.180297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:51.457 [2024-11-20 14:22:30.180307] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:51.457 [2024-11-20 14:22:30.180321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.457 "name": "Existed_Raid", 00:11:51.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.457 "strip_size_kb": 64, 00:11:51.457 "state": "configuring", 00:11:51.457 "raid_level": "raid0", 00:11:51.457 "superblock": false, 00:11:51.457 "num_base_bdevs": 4, 00:11:51.457 
"num_base_bdevs_discovered": 1, 00:11:51.457 "num_base_bdevs_operational": 4, 00:11:51.457 "base_bdevs_list": [ 00:11:51.457 { 00:11:51.457 "name": "BaseBdev1", 00:11:51.457 "uuid": "18356257-6ac3-47e8-969e-067ffad47b59", 00:11:51.457 "is_configured": true, 00:11:51.457 "data_offset": 0, 00:11:51.457 "data_size": 65536 00:11:51.457 }, 00:11:51.457 { 00:11:51.457 "name": "BaseBdev2", 00:11:51.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.457 "is_configured": false, 00:11:51.457 "data_offset": 0, 00:11:51.457 "data_size": 0 00:11:51.457 }, 00:11:51.457 { 00:11:51.457 "name": "BaseBdev3", 00:11:51.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.457 "is_configured": false, 00:11:51.457 "data_offset": 0, 00:11:51.457 "data_size": 0 00:11:51.457 }, 00:11:51.457 { 00:11:51.457 "name": "BaseBdev4", 00:11:51.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.457 "is_configured": false, 00:11:51.457 "data_offset": 0, 00:11:51.457 "data_size": 0 00:11:51.457 } 00:11:51.457 ] 00:11:51.457 }' 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.457 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 [2024-11-20 14:22:30.740191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.024 BaseBdev2 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:52.024 14:22:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 [ 00:11:52.024 { 00:11:52.024 "name": "BaseBdev2", 00:11:52.024 "aliases": [ 00:11:52.024 "0b7fa03e-d96f-41c6-89eb-158c307f2842" 00:11:52.024 ], 00:11:52.024 "product_name": "Malloc disk", 00:11:52.024 "block_size": 512, 00:11:52.024 "num_blocks": 65536, 00:11:52.024 "uuid": "0b7fa03e-d96f-41c6-89eb-158c307f2842", 00:11:52.024 "assigned_rate_limits": { 00:11:52.024 "rw_ios_per_sec": 0, 00:11:52.024 "rw_mbytes_per_sec": 0, 00:11:52.024 "r_mbytes_per_sec": 0, 00:11:52.024 "w_mbytes_per_sec": 0 00:11:52.024 }, 00:11:52.024 "claimed": true, 00:11:52.024 "claim_type": "exclusive_write", 00:11:52.024 "zoned": false, 00:11:52.024 "supported_io_types": { 
00:11:52.024 "read": true, 00:11:52.024 "write": true, 00:11:52.024 "unmap": true, 00:11:52.024 "flush": true, 00:11:52.024 "reset": true, 00:11:52.024 "nvme_admin": false, 00:11:52.024 "nvme_io": false, 00:11:52.024 "nvme_io_md": false, 00:11:52.024 "write_zeroes": true, 00:11:52.024 "zcopy": true, 00:11:52.024 "get_zone_info": false, 00:11:52.024 "zone_management": false, 00:11:52.024 "zone_append": false, 00:11:52.024 "compare": false, 00:11:52.024 "compare_and_write": false, 00:11:52.024 "abort": true, 00:11:52.024 "seek_hole": false, 00:11:52.024 "seek_data": false, 00:11:52.024 "copy": true, 00:11:52.024 "nvme_iov_md": false 00:11:52.024 }, 00:11:52.024 "memory_domains": [ 00:11:52.024 { 00:11:52.024 "dma_device_id": "system", 00:11:52.024 "dma_device_type": 1 00:11:52.024 }, 00:11:52.024 { 00:11:52.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.024 "dma_device_type": 2 00:11:52.024 } 00:11:52.024 ], 00:11:52.024 "driver_specific": {} 00:11:52.024 } 00:11:52.024 ] 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.024 "name": "Existed_Raid", 00:11:52.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.024 "strip_size_kb": 64, 00:11:52.024 "state": "configuring", 00:11:52.024 "raid_level": "raid0", 00:11:52.024 "superblock": false, 00:11:52.024 "num_base_bdevs": 4, 00:11:52.024 "num_base_bdevs_discovered": 2, 00:11:52.024 "num_base_bdevs_operational": 4, 00:11:52.024 "base_bdevs_list": [ 00:11:52.024 { 00:11:52.024 "name": "BaseBdev1", 00:11:52.024 "uuid": "18356257-6ac3-47e8-969e-067ffad47b59", 00:11:52.024 "is_configured": true, 00:11:52.024 "data_offset": 0, 00:11:52.024 "data_size": 65536 00:11:52.024 }, 00:11:52.024 { 00:11:52.024 "name": "BaseBdev2", 00:11:52.024 "uuid": "0b7fa03e-d96f-41c6-89eb-158c307f2842", 00:11:52.024 
"is_configured": true, 00:11:52.024 "data_offset": 0, 00:11:52.024 "data_size": 65536 00:11:52.024 }, 00:11:52.024 { 00:11:52.024 "name": "BaseBdev3", 00:11:52.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.024 "is_configured": false, 00:11:52.024 "data_offset": 0, 00:11:52.024 "data_size": 0 00:11:52.024 }, 00:11:52.024 { 00:11:52.024 "name": "BaseBdev4", 00:11:52.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.024 "is_configured": false, 00:11:52.024 "data_offset": 0, 00:11:52.024 "data_size": 0 00:11:52.024 } 00:11:52.024 ] 00:11:52.024 }' 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.024 14:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.591 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:52.591 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.591 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.591 [2024-11-20 14:22:31.332171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:52.591 BaseBdev3 00:11:52.591 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.591 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:52.591 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:52.591 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.591 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:52.591 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.591 14:22:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.591 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.591 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.592 [ 00:11:52.592 { 00:11:52.592 "name": "BaseBdev3", 00:11:52.592 "aliases": [ 00:11:52.592 "9376554d-2523-4d10-b92d-e45aac7115b1" 00:11:52.592 ], 00:11:52.592 "product_name": "Malloc disk", 00:11:52.592 "block_size": 512, 00:11:52.592 "num_blocks": 65536, 00:11:52.592 "uuid": "9376554d-2523-4d10-b92d-e45aac7115b1", 00:11:52.592 "assigned_rate_limits": { 00:11:52.592 "rw_ios_per_sec": 0, 00:11:52.592 "rw_mbytes_per_sec": 0, 00:11:52.592 "r_mbytes_per_sec": 0, 00:11:52.592 "w_mbytes_per_sec": 0 00:11:52.592 }, 00:11:52.592 "claimed": true, 00:11:52.592 "claim_type": "exclusive_write", 00:11:52.592 "zoned": false, 00:11:52.592 "supported_io_types": { 00:11:52.592 "read": true, 00:11:52.592 "write": true, 00:11:52.592 "unmap": true, 00:11:52.592 "flush": true, 00:11:52.592 "reset": true, 00:11:52.592 "nvme_admin": false, 00:11:52.592 "nvme_io": false, 00:11:52.592 "nvme_io_md": false, 00:11:52.592 "write_zeroes": true, 00:11:52.592 "zcopy": true, 00:11:52.592 "get_zone_info": false, 00:11:52.592 "zone_management": false, 00:11:52.592 "zone_append": false, 00:11:52.592 "compare": false, 00:11:52.592 "compare_and_write": false, 
00:11:52.592 "abort": true, 00:11:52.592 "seek_hole": false, 00:11:52.592 "seek_data": false, 00:11:52.592 "copy": true, 00:11:52.592 "nvme_iov_md": false 00:11:52.592 }, 00:11:52.592 "memory_domains": [ 00:11:52.592 { 00:11:52.592 "dma_device_id": "system", 00:11:52.592 "dma_device_type": 1 00:11:52.592 }, 00:11:52.592 { 00:11:52.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.592 "dma_device_type": 2 00:11:52.592 } 00:11:52.592 ], 00:11:52.592 "driver_specific": {} 00:11:52.592 } 00:11:52.592 ] 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.592 "name": "Existed_Raid", 00:11:52.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.592 "strip_size_kb": 64, 00:11:52.592 "state": "configuring", 00:11:52.592 "raid_level": "raid0", 00:11:52.592 "superblock": false, 00:11:52.592 "num_base_bdevs": 4, 00:11:52.592 "num_base_bdevs_discovered": 3, 00:11:52.592 "num_base_bdevs_operational": 4, 00:11:52.592 "base_bdevs_list": [ 00:11:52.592 { 00:11:52.592 "name": "BaseBdev1", 00:11:52.592 "uuid": "18356257-6ac3-47e8-969e-067ffad47b59", 00:11:52.592 "is_configured": true, 00:11:52.592 "data_offset": 0, 00:11:52.592 "data_size": 65536 00:11:52.592 }, 00:11:52.592 { 00:11:52.592 "name": "BaseBdev2", 00:11:52.592 "uuid": "0b7fa03e-d96f-41c6-89eb-158c307f2842", 00:11:52.592 "is_configured": true, 00:11:52.592 "data_offset": 0, 00:11:52.592 "data_size": 65536 00:11:52.592 }, 00:11:52.592 { 00:11:52.592 "name": "BaseBdev3", 00:11:52.592 "uuid": "9376554d-2523-4d10-b92d-e45aac7115b1", 00:11:52.592 "is_configured": true, 00:11:52.592 "data_offset": 0, 00:11:52.592 "data_size": 65536 00:11:52.592 }, 00:11:52.592 { 00:11:52.592 "name": "BaseBdev4", 00:11:52.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.592 "is_configured": false, 
00:11:52.592 "data_offset": 0, 00:11:52.592 "data_size": 0 00:11:52.592 } 00:11:52.592 ] 00:11:52.592 }' 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.592 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.157 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:53.157 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.157 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.157 [2024-11-20 14:22:31.907458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:53.157 [2024-11-20 14:22:31.907730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:53.157 [2024-11-20 14:22:31.907788] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:53.157 [2024-11-20 14:22:31.908278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:53.157 [2024-11-20 14:22:31.908628] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:53.157 [2024-11-20 14:22:31.908778] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:53.157 [2024-11-20 14:22:31.909250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.157 BaseBdev4 00:11:53.157 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.157 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.158 [ 00:11:53.158 { 00:11:53.158 "name": "BaseBdev4", 00:11:53.158 "aliases": [ 00:11:53.158 "be3eb9e4-d46a-4266-a60e-5b4721593c8a" 00:11:53.158 ], 00:11:53.158 "product_name": "Malloc disk", 00:11:53.158 "block_size": 512, 00:11:53.158 "num_blocks": 65536, 00:11:53.158 "uuid": "be3eb9e4-d46a-4266-a60e-5b4721593c8a", 00:11:53.158 "assigned_rate_limits": { 00:11:53.158 "rw_ios_per_sec": 0, 00:11:53.158 "rw_mbytes_per_sec": 0, 00:11:53.158 "r_mbytes_per_sec": 0, 00:11:53.158 "w_mbytes_per_sec": 0 00:11:53.158 }, 00:11:53.158 "claimed": true, 00:11:53.158 "claim_type": "exclusive_write", 00:11:53.158 "zoned": false, 00:11:53.158 "supported_io_types": { 00:11:53.158 "read": true, 00:11:53.158 "write": true, 00:11:53.158 "unmap": true, 00:11:53.158 "flush": true, 00:11:53.158 "reset": true, 00:11:53.158 
"nvme_admin": false, 00:11:53.158 "nvme_io": false, 00:11:53.158 "nvme_io_md": false, 00:11:53.158 "write_zeroes": true, 00:11:53.158 "zcopy": true, 00:11:53.158 "get_zone_info": false, 00:11:53.158 "zone_management": false, 00:11:53.158 "zone_append": false, 00:11:53.158 "compare": false, 00:11:53.158 "compare_and_write": false, 00:11:53.158 "abort": true, 00:11:53.158 "seek_hole": false, 00:11:53.158 "seek_data": false, 00:11:53.158 "copy": true, 00:11:53.158 "nvme_iov_md": false 00:11:53.158 }, 00:11:53.158 "memory_domains": [ 00:11:53.158 { 00:11:53.158 "dma_device_id": "system", 00:11:53.158 "dma_device_type": 1 00:11:53.158 }, 00:11:53.158 { 00:11:53.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.158 "dma_device_type": 2 00:11:53.158 } 00:11:53.158 ], 00:11:53.158 "driver_specific": {} 00:11:53.158 } 00:11:53.158 ] 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.158 14:22:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.158 "name": "Existed_Raid", 00:11:53.158 "uuid": "038e2f91-1181-4513-89cc-c597bf11971e", 00:11:53.158 "strip_size_kb": 64, 00:11:53.158 "state": "online", 00:11:53.158 "raid_level": "raid0", 00:11:53.158 "superblock": false, 00:11:53.158 "num_base_bdevs": 4, 00:11:53.158 "num_base_bdevs_discovered": 4, 00:11:53.158 "num_base_bdevs_operational": 4, 00:11:53.158 "base_bdevs_list": [ 00:11:53.158 { 00:11:53.158 "name": "BaseBdev1", 00:11:53.158 "uuid": "18356257-6ac3-47e8-969e-067ffad47b59", 00:11:53.158 "is_configured": true, 00:11:53.158 "data_offset": 0, 00:11:53.158 "data_size": 65536 00:11:53.158 }, 00:11:53.158 { 00:11:53.158 "name": "BaseBdev2", 00:11:53.158 "uuid": "0b7fa03e-d96f-41c6-89eb-158c307f2842", 00:11:53.158 "is_configured": true, 00:11:53.158 "data_offset": 0, 00:11:53.158 "data_size": 65536 00:11:53.158 }, 00:11:53.158 { 00:11:53.158 "name": "BaseBdev3", 00:11:53.158 "uuid": 
"9376554d-2523-4d10-b92d-e45aac7115b1", 00:11:53.158 "is_configured": true, 00:11:53.158 "data_offset": 0, 00:11:53.158 "data_size": 65536 00:11:53.158 }, 00:11:53.158 { 00:11:53.158 "name": "BaseBdev4", 00:11:53.158 "uuid": "be3eb9e4-d46a-4266-a60e-5b4721593c8a", 00:11:53.158 "is_configured": true, 00:11:53.158 "data_offset": 0, 00:11:53.158 "data_size": 65536 00:11:53.158 } 00:11:53.158 ] 00:11:53.158 }' 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.158 14:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.726 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:53.726 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:53.726 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:53.726 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:53.726 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:53.726 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:53.726 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:53.726 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.726 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.726 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:53.726 [2024-11-20 14:22:32.468110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.726 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.726 14:22:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:53.726 "name": "Existed_Raid", 00:11:53.726 "aliases": [ 00:11:53.726 "038e2f91-1181-4513-89cc-c597bf11971e" 00:11:53.726 ], 00:11:53.726 "product_name": "Raid Volume", 00:11:53.726 "block_size": 512, 00:11:53.726 "num_blocks": 262144, 00:11:53.726 "uuid": "038e2f91-1181-4513-89cc-c597bf11971e", 00:11:53.726 "assigned_rate_limits": { 00:11:53.726 "rw_ios_per_sec": 0, 00:11:53.726 "rw_mbytes_per_sec": 0, 00:11:53.726 "r_mbytes_per_sec": 0, 00:11:53.726 "w_mbytes_per_sec": 0 00:11:53.726 }, 00:11:53.726 "claimed": false, 00:11:53.726 "zoned": false, 00:11:53.726 "supported_io_types": { 00:11:53.726 "read": true, 00:11:53.726 "write": true, 00:11:53.726 "unmap": true, 00:11:53.726 "flush": true, 00:11:53.726 "reset": true, 00:11:53.726 "nvme_admin": false, 00:11:53.726 "nvme_io": false, 00:11:53.726 "nvme_io_md": false, 00:11:53.726 "write_zeroes": true, 00:11:53.726 "zcopy": false, 00:11:53.726 "get_zone_info": false, 00:11:53.726 "zone_management": false, 00:11:53.726 "zone_append": false, 00:11:53.726 "compare": false, 00:11:53.726 "compare_and_write": false, 00:11:53.726 "abort": false, 00:11:53.726 "seek_hole": false, 00:11:53.726 "seek_data": false, 00:11:53.726 "copy": false, 00:11:53.726 "nvme_iov_md": false 00:11:53.726 }, 00:11:53.726 "memory_domains": [ 00:11:53.726 { 00:11:53.726 "dma_device_id": "system", 00:11:53.726 "dma_device_type": 1 00:11:53.726 }, 00:11:53.726 { 00:11:53.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.726 "dma_device_type": 2 00:11:53.726 }, 00:11:53.726 { 00:11:53.726 "dma_device_id": "system", 00:11:53.726 "dma_device_type": 1 00:11:53.726 }, 00:11:53.726 { 00:11:53.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.726 "dma_device_type": 2 00:11:53.726 }, 00:11:53.726 { 00:11:53.726 "dma_device_id": "system", 00:11:53.726 "dma_device_type": 1 00:11:53.726 }, 00:11:53.726 { 00:11:53.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:53.726 "dma_device_type": 2 00:11:53.726 }, 00:11:53.726 { 00:11:53.726 "dma_device_id": "system", 00:11:53.726 "dma_device_type": 1 00:11:53.726 }, 00:11:53.726 { 00:11:53.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.726 "dma_device_type": 2 00:11:53.726 } 00:11:53.726 ], 00:11:53.726 "driver_specific": { 00:11:53.726 "raid": { 00:11:53.726 "uuid": "038e2f91-1181-4513-89cc-c597bf11971e", 00:11:53.726 "strip_size_kb": 64, 00:11:53.726 "state": "online", 00:11:53.726 "raid_level": "raid0", 00:11:53.726 "superblock": false, 00:11:53.726 "num_base_bdevs": 4, 00:11:53.726 "num_base_bdevs_discovered": 4, 00:11:53.726 "num_base_bdevs_operational": 4, 00:11:53.726 "base_bdevs_list": [ 00:11:53.726 { 00:11:53.726 "name": "BaseBdev1", 00:11:53.726 "uuid": "18356257-6ac3-47e8-969e-067ffad47b59", 00:11:53.726 "is_configured": true, 00:11:53.726 "data_offset": 0, 00:11:53.726 "data_size": 65536 00:11:53.726 }, 00:11:53.726 { 00:11:53.726 "name": "BaseBdev2", 00:11:53.726 "uuid": "0b7fa03e-d96f-41c6-89eb-158c307f2842", 00:11:53.726 "is_configured": true, 00:11:53.726 "data_offset": 0, 00:11:53.726 "data_size": 65536 00:11:53.726 }, 00:11:53.726 { 00:11:53.726 "name": "BaseBdev3", 00:11:53.726 "uuid": "9376554d-2523-4d10-b92d-e45aac7115b1", 00:11:53.726 "is_configured": true, 00:11:53.726 "data_offset": 0, 00:11:53.726 "data_size": 65536 00:11:53.726 }, 00:11:53.726 { 00:11:53.726 "name": "BaseBdev4", 00:11:53.726 "uuid": "be3eb9e4-d46a-4266-a60e-5b4721593c8a", 00:11:53.726 "is_configured": true, 00:11:53.726 "data_offset": 0, 00:11:53.726 "data_size": 65536 00:11:53.726 } 00:11:53.726 ] 00:11:53.726 } 00:11:53.726 } 00:11:53.726 }' 00:11:53.726 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:53.727 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:53.727 BaseBdev2 00:11:53.727 BaseBdev3 
00:11:53.727 BaseBdev4' 00:11:53.727 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.727 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:53.727 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.727 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:53.727 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.727 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.727 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.727 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.727 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.727 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.727 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.727 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.727 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:53.727 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.727 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.727 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.987 14:22:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.987 14:22:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.987 [2024-11-20 14:22:32.831806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:53.987 [2024-11-20 14:22:32.831846] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.987 [2024-11-20 14:22:32.831911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.987 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.246 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.246 "name": "Existed_Raid", 00:11:54.246 "uuid": "038e2f91-1181-4513-89cc-c597bf11971e", 00:11:54.246 "strip_size_kb": 64, 00:11:54.246 "state": "offline", 00:11:54.246 "raid_level": "raid0", 00:11:54.246 "superblock": false, 00:11:54.246 "num_base_bdevs": 4, 00:11:54.246 "num_base_bdevs_discovered": 3, 00:11:54.246 "num_base_bdevs_operational": 3, 00:11:54.246 "base_bdevs_list": [ 00:11:54.246 { 00:11:54.246 "name": null, 00:11:54.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.246 "is_configured": false, 00:11:54.246 "data_offset": 0, 00:11:54.246 "data_size": 65536 00:11:54.246 }, 00:11:54.246 { 00:11:54.246 "name": "BaseBdev2", 00:11:54.246 "uuid": "0b7fa03e-d96f-41c6-89eb-158c307f2842", 00:11:54.246 "is_configured": 
true, 00:11:54.246 "data_offset": 0, 00:11:54.246 "data_size": 65536 00:11:54.246 }, 00:11:54.246 { 00:11:54.246 "name": "BaseBdev3", 00:11:54.246 "uuid": "9376554d-2523-4d10-b92d-e45aac7115b1", 00:11:54.246 "is_configured": true, 00:11:54.246 "data_offset": 0, 00:11:54.246 "data_size": 65536 00:11:54.246 }, 00:11:54.246 { 00:11:54.246 "name": "BaseBdev4", 00:11:54.246 "uuid": "be3eb9e4-d46a-4266-a60e-5b4721593c8a", 00:11:54.246 "is_configured": true, 00:11:54.246 "data_offset": 0, 00:11:54.246 "data_size": 65536 00:11:54.246 } 00:11:54.246 ] 00:11:54.246 }' 00:11:54.246 14:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.246 14:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.506 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:54.506 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:54.506 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.506 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:54.506 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.506 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.506 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.506 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:54.506 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:54.506 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:54.506 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:54.506 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.506 [2024-11-20 14:22:33.460391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.765 [2024-11-20 14:22:33.598071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:54.765 14:22:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.765 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.766 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.766 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:54.766 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:54.766 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:54.766 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.766 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.766 [2024-11-20 14:22:33.739561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:54.766 [2024-11-20 14:22:33.739621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.025 BaseBdev2 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.025 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.026 [ 00:11:55.026 { 00:11:55.026 "name": "BaseBdev2", 00:11:55.026 "aliases": [ 00:11:55.026 "2a434cd5-ca20-450b-86f5-709f629c5b24" 00:11:55.026 ], 00:11:55.026 "product_name": "Malloc disk", 00:11:55.026 "block_size": 512, 00:11:55.026 "num_blocks": 65536, 00:11:55.026 "uuid": "2a434cd5-ca20-450b-86f5-709f629c5b24", 00:11:55.026 "assigned_rate_limits": { 00:11:55.026 "rw_ios_per_sec": 0, 00:11:55.026 "rw_mbytes_per_sec": 0, 00:11:55.026 "r_mbytes_per_sec": 0, 00:11:55.026 "w_mbytes_per_sec": 0 00:11:55.026 }, 00:11:55.026 "claimed": false, 00:11:55.026 "zoned": false, 00:11:55.026 "supported_io_types": { 00:11:55.026 "read": true, 00:11:55.026 "write": true, 00:11:55.026 "unmap": true, 00:11:55.026 "flush": true, 00:11:55.026 "reset": true, 00:11:55.026 "nvme_admin": false, 00:11:55.026 "nvme_io": false, 00:11:55.026 "nvme_io_md": false, 00:11:55.026 "write_zeroes": true, 00:11:55.026 "zcopy": true, 00:11:55.026 "get_zone_info": false, 00:11:55.026 "zone_management": false, 00:11:55.026 "zone_append": false, 00:11:55.026 "compare": false, 00:11:55.026 "compare_and_write": false, 00:11:55.026 "abort": true, 00:11:55.026 "seek_hole": false, 00:11:55.026 
"seek_data": false, 00:11:55.026 "copy": true, 00:11:55.026 "nvme_iov_md": false 00:11:55.026 }, 00:11:55.026 "memory_domains": [ 00:11:55.026 { 00:11:55.026 "dma_device_id": "system", 00:11:55.026 "dma_device_type": 1 00:11:55.026 }, 00:11:55.026 { 00:11:55.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.026 "dma_device_type": 2 00:11:55.026 } 00:11:55.026 ], 00:11:55.026 "driver_specific": {} 00:11:55.026 } 00:11:55.026 ] 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.026 BaseBdev3 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.026 14:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.026 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.026 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:55.026 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.026 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.285 [ 00:11:55.285 { 00:11:55.285 "name": "BaseBdev3", 00:11:55.285 "aliases": [ 00:11:55.285 "d55ad380-795d-4ff7-98eb-8d3c1f0ecb16" 00:11:55.285 ], 00:11:55.285 "product_name": "Malloc disk", 00:11:55.285 "block_size": 512, 00:11:55.285 "num_blocks": 65536, 00:11:55.285 "uuid": "d55ad380-795d-4ff7-98eb-8d3c1f0ecb16", 00:11:55.285 "assigned_rate_limits": { 00:11:55.285 "rw_ios_per_sec": 0, 00:11:55.285 "rw_mbytes_per_sec": 0, 00:11:55.285 "r_mbytes_per_sec": 0, 00:11:55.285 "w_mbytes_per_sec": 0 00:11:55.285 }, 00:11:55.285 "claimed": false, 00:11:55.285 "zoned": false, 00:11:55.285 "supported_io_types": { 00:11:55.285 "read": true, 00:11:55.285 "write": true, 00:11:55.285 "unmap": true, 00:11:55.285 "flush": true, 00:11:55.285 "reset": true, 00:11:55.285 "nvme_admin": false, 00:11:55.285 "nvme_io": false, 00:11:55.285 "nvme_io_md": false, 00:11:55.285 "write_zeroes": true, 00:11:55.285 "zcopy": true, 00:11:55.285 "get_zone_info": false, 00:11:55.285 "zone_management": false, 00:11:55.285 "zone_append": false, 00:11:55.285 "compare": false, 00:11:55.285 "compare_and_write": false, 00:11:55.285 "abort": true, 00:11:55.285 "seek_hole": false, 00:11:55.285 "seek_data": false, 
00:11:55.285 "copy": true, 00:11:55.285 "nvme_iov_md": false 00:11:55.285 }, 00:11:55.285 "memory_domains": [ 00:11:55.285 { 00:11:55.285 "dma_device_id": "system", 00:11:55.285 "dma_device_type": 1 00:11:55.285 }, 00:11:55.285 { 00:11:55.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.285 "dma_device_type": 2 00:11:55.285 } 00:11:55.285 ], 00:11:55.285 "driver_specific": {} 00:11:55.285 } 00:11:55.285 ] 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.285 BaseBdev4 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.285 
14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.285 [ 00:11:55.285 { 00:11:55.285 "name": "BaseBdev4", 00:11:55.285 "aliases": [ 00:11:55.285 "7c537895-95cc-486c-bf3c-f6bbda5dc3ff" 00:11:55.285 ], 00:11:55.285 "product_name": "Malloc disk", 00:11:55.285 "block_size": 512, 00:11:55.285 "num_blocks": 65536, 00:11:55.285 "uuid": "7c537895-95cc-486c-bf3c-f6bbda5dc3ff", 00:11:55.285 "assigned_rate_limits": { 00:11:55.285 "rw_ios_per_sec": 0, 00:11:55.285 "rw_mbytes_per_sec": 0, 00:11:55.285 "r_mbytes_per_sec": 0, 00:11:55.285 "w_mbytes_per_sec": 0 00:11:55.285 }, 00:11:55.285 "claimed": false, 00:11:55.285 "zoned": false, 00:11:55.285 "supported_io_types": { 00:11:55.285 "read": true, 00:11:55.285 "write": true, 00:11:55.285 "unmap": true, 00:11:55.285 "flush": true, 00:11:55.285 "reset": true, 00:11:55.285 "nvme_admin": false, 00:11:55.285 "nvme_io": false, 00:11:55.285 "nvme_io_md": false, 00:11:55.285 "write_zeroes": true, 00:11:55.285 "zcopy": true, 00:11:55.285 "get_zone_info": false, 00:11:55.285 "zone_management": false, 00:11:55.285 "zone_append": false, 00:11:55.285 "compare": false, 00:11:55.285 "compare_and_write": false, 00:11:55.285 "abort": true, 00:11:55.285 "seek_hole": false, 00:11:55.285 "seek_data": false, 00:11:55.285 
"copy": true, 00:11:55.285 "nvme_iov_md": false 00:11:55.285 }, 00:11:55.285 "memory_domains": [ 00:11:55.285 { 00:11:55.285 "dma_device_id": "system", 00:11:55.285 "dma_device_type": 1 00:11:55.285 }, 00:11:55.285 { 00:11:55.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.285 "dma_device_type": 2 00:11:55.285 } 00:11:55.285 ], 00:11:55.285 "driver_specific": {} 00:11:55.285 } 00:11:55.285 ] 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.285 [2024-11-20 14:22:34.103511] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.285 [2024-11-20 14:22:34.103572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.285 [2024-11-20 14:22:34.103605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.285 [2024-11-20 14:22:34.106020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.285 [2024-11-20 14:22:34.106235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.285 14:22:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:55.285 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.286 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.286 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:55.286 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.286 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.286 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.286 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.286 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.286 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.286 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.286 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.286 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.286 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.286 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.286 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.286 "name": "Existed_Raid", 00:11:55.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.286 "strip_size_kb": 64, 00:11:55.286 "state": "configuring", 00:11:55.286 
"raid_level": "raid0", 00:11:55.286 "superblock": false, 00:11:55.286 "num_base_bdevs": 4, 00:11:55.286 "num_base_bdevs_discovered": 3, 00:11:55.286 "num_base_bdevs_operational": 4, 00:11:55.286 "base_bdevs_list": [ 00:11:55.286 { 00:11:55.286 "name": "BaseBdev1", 00:11:55.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.286 "is_configured": false, 00:11:55.286 "data_offset": 0, 00:11:55.286 "data_size": 0 00:11:55.286 }, 00:11:55.286 { 00:11:55.286 "name": "BaseBdev2", 00:11:55.286 "uuid": "2a434cd5-ca20-450b-86f5-709f629c5b24", 00:11:55.286 "is_configured": true, 00:11:55.286 "data_offset": 0, 00:11:55.286 "data_size": 65536 00:11:55.286 }, 00:11:55.286 { 00:11:55.286 "name": "BaseBdev3", 00:11:55.286 "uuid": "d55ad380-795d-4ff7-98eb-8d3c1f0ecb16", 00:11:55.286 "is_configured": true, 00:11:55.286 "data_offset": 0, 00:11:55.286 "data_size": 65536 00:11:55.286 }, 00:11:55.286 { 00:11:55.286 "name": "BaseBdev4", 00:11:55.286 "uuid": "7c537895-95cc-486c-bf3c-f6bbda5dc3ff", 00:11:55.286 "is_configured": true, 00:11:55.286 "data_offset": 0, 00:11:55.286 "data_size": 65536 00:11:55.286 } 00:11:55.286 ] 00:11:55.286 }' 00:11:55.286 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.286 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.853 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:55.853 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.853 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.853 [2024-11-20 14:22:34.607648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:55.853 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.854 "name": "Existed_Raid", 00:11:55.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.854 "strip_size_kb": 64, 00:11:55.854 "state": "configuring", 00:11:55.854 "raid_level": "raid0", 00:11:55.854 "superblock": false, 00:11:55.854 
"num_base_bdevs": 4, 00:11:55.854 "num_base_bdevs_discovered": 2, 00:11:55.854 "num_base_bdevs_operational": 4, 00:11:55.854 "base_bdevs_list": [ 00:11:55.854 { 00:11:55.854 "name": "BaseBdev1", 00:11:55.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.854 "is_configured": false, 00:11:55.854 "data_offset": 0, 00:11:55.854 "data_size": 0 00:11:55.854 }, 00:11:55.854 { 00:11:55.854 "name": null, 00:11:55.854 "uuid": "2a434cd5-ca20-450b-86f5-709f629c5b24", 00:11:55.854 "is_configured": false, 00:11:55.854 "data_offset": 0, 00:11:55.854 "data_size": 65536 00:11:55.854 }, 00:11:55.854 { 00:11:55.854 "name": "BaseBdev3", 00:11:55.854 "uuid": "d55ad380-795d-4ff7-98eb-8d3c1f0ecb16", 00:11:55.854 "is_configured": true, 00:11:55.854 "data_offset": 0, 00:11:55.854 "data_size": 65536 00:11:55.854 }, 00:11:55.854 { 00:11:55.854 "name": "BaseBdev4", 00:11:55.854 "uuid": "7c537895-95cc-486c-bf3c-f6bbda5dc3ff", 00:11:55.854 "is_configured": true, 00:11:55.854 "data_offset": 0, 00:11:55.854 "data_size": 65536 00:11:55.854 } 00:11:55.854 ] 00:11:55.854 }' 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.854 14:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:56.423 14:22:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.423 [2024-11-20 14:22:35.217595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.423 BaseBdev1 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.423 [ 00:11:56.423 { 00:11:56.423 "name": "BaseBdev1", 00:11:56.423 "aliases": [ 00:11:56.423 "e0b8ae5d-a5cf-4ded-9d85-0297197fd2db" 00:11:56.423 ], 00:11:56.423 "product_name": "Malloc disk", 00:11:56.423 "block_size": 512, 00:11:56.423 "num_blocks": 65536, 00:11:56.423 "uuid": "e0b8ae5d-a5cf-4ded-9d85-0297197fd2db", 00:11:56.423 "assigned_rate_limits": { 00:11:56.423 "rw_ios_per_sec": 0, 00:11:56.423 "rw_mbytes_per_sec": 0, 00:11:56.423 "r_mbytes_per_sec": 0, 00:11:56.423 "w_mbytes_per_sec": 0 00:11:56.423 }, 00:11:56.423 "claimed": true, 00:11:56.423 "claim_type": "exclusive_write", 00:11:56.423 "zoned": false, 00:11:56.423 "supported_io_types": { 00:11:56.423 "read": true, 00:11:56.423 "write": true, 00:11:56.423 "unmap": true, 00:11:56.423 "flush": true, 00:11:56.423 "reset": true, 00:11:56.423 "nvme_admin": false, 00:11:56.423 "nvme_io": false, 00:11:56.423 "nvme_io_md": false, 00:11:56.423 "write_zeroes": true, 00:11:56.423 "zcopy": true, 00:11:56.423 "get_zone_info": false, 00:11:56.423 "zone_management": false, 00:11:56.423 "zone_append": false, 00:11:56.423 "compare": false, 00:11:56.423 "compare_and_write": false, 00:11:56.423 "abort": true, 00:11:56.423 "seek_hole": false, 00:11:56.423 "seek_data": false, 00:11:56.423 "copy": true, 00:11:56.423 "nvme_iov_md": false 00:11:56.423 }, 00:11:56.423 "memory_domains": [ 00:11:56.423 { 00:11:56.423 "dma_device_id": "system", 00:11:56.423 "dma_device_type": 1 00:11:56.423 }, 00:11:56.423 { 00:11:56.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.423 "dma_device_type": 2 00:11:56.423 } 00:11:56.423 ], 00:11:56.423 "driver_specific": {} 00:11:56.423 } 00:11:56.423 ] 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.423 "name": "Existed_Raid", 00:11:56.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.423 "strip_size_kb": 64, 00:11:56.423 "state": "configuring", 00:11:56.423 "raid_level": "raid0", 00:11:56.423 "superblock": false, 
00:11:56.423 "num_base_bdevs": 4, 00:11:56.423 "num_base_bdevs_discovered": 3, 00:11:56.423 "num_base_bdevs_operational": 4, 00:11:56.423 "base_bdevs_list": [ 00:11:56.423 { 00:11:56.423 "name": "BaseBdev1", 00:11:56.423 "uuid": "e0b8ae5d-a5cf-4ded-9d85-0297197fd2db", 00:11:56.423 "is_configured": true, 00:11:56.423 "data_offset": 0, 00:11:56.423 "data_size": 65536 00:11:56.423 }, 00:11:56.423 { 00:11:56.423 "name": null, 00:11:56.423 "uuid": "2a434cd5-ca20-450b-86f5-709f629c5b24", 00:11:56.423 "is_configured": false, 00:11:56.423 "data_offset": 0, 00:11:56.423 "data_size": 65536 00:11:56.423 }, 00:11:56.423 { 00:11:56.423 "name": "BaseBdev3", 00:11:56.423 "uuid": "d55ad380-795d-4ff7-98eb-8d3c1f0ecb16", 00:11:56.423 "is_configured": true, 00:11:56.423 "data_offset": 0, 00:11:56.423 "data_size": 65536 00:11:56.423 }, 00:11:56.423 { 00:11:56.423 "name": "BaseBdev4", 00:11:56.423 "uuid": "7c537895-95cc-486c-bf3c-f6bbda5dc3ff", 00:11:56.423 "is_configured": true, 00:11:56.423 "data_offset": 0, 00:11:56.423 "data_size": 65536 00:11:56.423 } 00:11:56.423 ] 00:11:56.423 }' 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.423 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:56.992 14:22:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.992 [2024-11-20 14:22:35.789824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.992 "name": "Existed_Raid", 00:11:56.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.992 "strip_size_kb": 64, 00:11:56.992 "state": "configuring", 00:11:56.992 "raid_level": "raid0", 00:11:56.992 "superblock": false, 00:11:56.992 "num_base_bdevs": 4, 00:11:56.992 "num_base_bdevs_discovered": 2, 00:11:56.992 "num_base_bdevs_operational": 4, 00:11:56.992 "base_bdevs_list": [ 00:11:56.992 { 00:11:56.992 "name": "BaseBdev1", 00:11:56.992 "uuid": "e0b8ae5d-a5cf-4ded-9d85-0297197fd2db", 00:11:56.992 "is_configured": true, 00:11:56.992 "data_offset": 0, 00:11:56.992 "data_size": 65536 00:11:56.992 }, 00:11:56.992 { 00:11:56.992 "name": null, 00:11:56.992 "uuid": "2a434cd5-ca20-450b-86f5-709f629c5b24", 00:11:56.992 "is_configured": false, 00:11:56.992 "data_offset": 0, 00:11:56.992 "data_size": 65536 00:11:56.992 }, 00:11:56.992 { 00:11:56.992 "name": null, 00:11:56.992 "uuid": "d55ad380-795d-4ff7-98eb-8d3c1f0ecb16", 00:11:56.992 "is_configured": false, 00:11:56.992 "data_offset": 0, 00:11:56.992 "data_size": 65536 00:11:56.992 }, 00:11:56.992 { 00:11:56.992 "name": "BaseBdev4", 00:11:56.992 "uuid": "7c537895-95cc-486c-bf3c-f6bbda5dc3ff", 00:11:56.992 "is_configured": true, 00:11:56.992 "data_offset": 0, 00:11:56.992 "data_size": 65536 00:11:56.992 } 00:11:56.992 ] 00:11:56.992 }' 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.992 14:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.561 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:57.561 14:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.561 14:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.561 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.562 [2024-11-20 14:22:36.361977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.562 "name": "Existed_Raid", 00:11:57.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.562 "strip_size_kb": 64, 00:11:57.562 "state": "configuring", 00:11:57.562 "raid_level": "raid0", 00:11:57.562 "superblock": false, 00:11:57.562 "num_base_bdevs": 4, 00:11:57.562 "num_base_bdevs_discovered": 3, 00:11:57.562 "num_base_bdevs_operational": 4, 00:11:57.562 "base_bdevs_list": [ 00:11:57.562 { 00:11:57.562 "name": "BaseBdev1", 00:11:57.562 "uuid": "e0b8ae5d-a5cf-4ded-9d85-0297197fd2db", 00:11:57.562 "is_configured": true, 00:11:57.562 "data_offset": 0, 00:11:57.562 "data_size": 65536 00:11:57.562 }, 00:11:57.562 { 00:11:57.562 "name": null, 00:11:57.562 "uuid": "2a434cd5-ca20-450b-86f5-709f629c5b24", 00:11:57.562 "is_configured": false, 00:11:57.562 "data_offset": 0, 00:11:57.562 "data_size": 65536 00:11:57.562 }, 00:11:57.562 { 00:11:57.562 "name": "BaseBdev3", 00:11:57.562 "uuid": "d55ad380-795d-4ff7-98eb-8d3c1f0ecb16", 00:11:57.562 "is_configured": 
true, 00:11:57.562 "data_offset": 0, 00:11:57.562 "data_size": 65536 00:11:57.562 }, 00:11:57.562 { 00:11:57.562 "name": "BaseBdev4", 00:11:57.562 "uuid": "7c537895-95cc-486c-bf3c-f6bbda5dc3ff", 00:11:57.562 "is_configured": true, 00:11:57.562 "data_offset": 0, 00:11:57.562 "data_size": 65536 00:11:57.562 } 00:11:57.562 ] 00:11:57.562 }' 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.562 14:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.130 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:58.130 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.130 14:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.130 14:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.130 14:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.130 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:58.130 14:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:58.130 14:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.130 14:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.130 [2024-11-20 14:22:36.918137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.130 "name": "Existed_Raid", 00:11:58.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.130 "strip_size_kb": 64, 00:11:58.130 "state": "configuring", 00:11:58.130 "raid_level": "raid0", 00:11:58.130 "superblock": false, 00:11:58.130 "num_base_bdevs": 4, 00:11:58.130 "num_base_bdevs_discovered": 2, 00:11:58.130 "num_base_bdevs_operational": 4, 00:11:58.130 
"base_bdevs_list": [ 00:11:58.130 { 00:11:58.130 "name": null, 00:11:58.130 "uuid": "e0b8ae5d-a5cf-4ded-9d85-0297197fd2db", 00:11:58.130 "is_configured": false, 00:11:58.130 "data_offset": 0, 00:11:58.130 "data_size": 65536 00:11:58.130 }, 00:11:58.130 { 00:11:58.130 "name": null, 00:11:58.130 "uuid": "2a434cd5-ca20-450b-86f5-709f629c5b24", 00:11:58.130 "is_configured": false, 00:11:58.130 "data_offset": 0, 00:11:58.130 "data_size": 65536 00:11:58.130 }, 00:11:58.130 { 00:11:58.130 "name": "BaseBdev3", 00:11:58.130 "uuid": "d55ad380-795d-4ff7-98eb-8d3c1f0ecb16", 00:11:58.130 "is_configured": true, 00:11:58.130 "data_offset": 0, 00:11:58.130 "data_size": 65536 00:11:58.130 }, 00:11:58.130 { 00:11:58.130 "name": "BaseBdev4", 00:11:58.130 "uuid": "7c537895-95cc-486c-bf3c-f6bbda5dc3ff", 00:11:58.130 "is_configured": true, 00:11:58.130 "data_offset": 0, 00:11:58.130 "data_size": 65536 00:11:58.130 } 00:11:58.130 ] 00:11:58.130 }' 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.130 14:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.697 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.697 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:58.697 14:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.697 14:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:58.698 14:22:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.698 [2024-11-20 14:22:37.574309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.698 "name": "Existed_Raid", 00:11:58.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.698 "strip_size_kb": 64, 00:11:58.698 "state": "configuring", 00:11:58.698 "raid_level": "raid0", 00:11:58.698 "superblock": false, 00:11:58.698 "num_base_bdevs": 4, 00:11:58.698 "num_base_bdevs_discovered": 3, 00:11:58.698 "num_base_bdevs_operational": 4, 00:11:58.698 "base_bdevs_list": [ 00:11:58.698 { 00:11:58.698 "name": null, 00:11:58.698 "uuid": "e0b8ae5d-a5cf-4ded-9d85-0297197fd2db", 00:11:58.698 "is_configured": false, 00:11:58.698 "data_offset": 0, 00:11:58.698 "data_size": 65536 00:11:58.698 }, 00:11:58.698 { 00:11:58.698 "name": "BaseBdev2", 00:11:58.698 "uuid": "2a434cd5-ca20-450b-86f5-709f629c5b24", 00:11:58.698 "is_configured": true, 00:11:58.698 "data_offset": 0, 00:11:58.698 "data_size": 65536 00:11:58.698 }, 00:11:58.698 { 00:11:58.698 "name": "BaseBdev3", 00:11:58.698 "uuid": "d55ad380-795d-4ff7-98eb-8d3c1f0ecb16", 00:11:58.698 "is_configured": true, 00:11:58.698 "data_offset": 0, 00:11:58.698 "data_size": 65536 00:11:58.698 }, 00:11:58.698 { 00:11:58.698 "name": "BaseBdev4", 00:11:58.698 "uuid": "7c537895-95cc-486c-bf3c-f6bbda5dc3ff", 00:11:58.698 "is_configured": true, 00:11:58.698 "data_offset": 0, 00:11:58.698 "data_size": 65536 00:11:58.698 } 00:11:58.698 ] 00:11:58.698 }' 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.698 14:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e0b8ae5d-a5cf-4ded-9d85-0297197fd2db 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.307 [2024-11-20 14:22:38.228075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:59.307 [2024-11-20 14:22:38.228133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:59.307 [2024-11-20 14:22:38.228147] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:59.307 [2024-11-20 14:22:38.228496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:59.307 [2024-11-20 14:22:38.228681] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:59.307 [2024-11-20 14:22:38.228704] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:59.307 [2024-11-20 14:22:38.228979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.307 NewBaseBdev 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.307 [ 00:11:59.307 { 
00:11:59.307 "name": "NewBaseBdev", 00:11:59.307 "aliases": [ 00:11:59.307 "e0b8ae5d-a5cf-4ded-9d85-0297197fd2db" 00:11:59.307 ], 00:11:59.307 "product_name": "Malloc disk", 00:11:59.307 "block_size": 512, 00:11:59.307 "num_blocks": 65536, 00:11:59.307 "uuid": "e0b8ae5d-a5cf-4ded-9d85-0297197fd2db", 00:11:59.307 "assigned_rate_limits": { 00:11:59.307 "rw_ios_per_sec": 0, 00:11:59.307 "rw_mbytes_per_sec": 0, 00:11:59.307 "r_mbytes_per_sec": 0, 00:11:59.307 "w_mbytes_per_sec": 0 00:11:59.307 }, 00:11:59.307 "claimed": true, 00:11:59.307 "claim_type": "exclusive_write", 00:11:59.307 "zoned": false, 00:11:59.307 "supported_io_types": { 00:11:59.307 "read": true, 00:11:59.307 "write": true, 00:11:59.307 "unmap": true, 00:11:59.307 "flush": true, 00:11:59.307 "reset": true, 00:11:59.307 "nvme_admin": false, 00:11:59.307 "nvme_io": false, 00:11:59.307 "nvme_io_md": false, 00:11:59.307 "write_zeroes": true, 00:11:59.307 "zcopy": true, 00:11:59.307 "get_zone_info": false, 00:11:59.307 "zone_management": false, 00:11:59.307 "zone_append": false, 00:11:59.307 "compare": false, 00:11:59.307 "compare_and_write": false, 00:11:59.307 "abort": true, 00:11:59.307 "seek_hole": false, 00:11:59.307 "seek_data": false, 00:11:59.307 "copy": true, 00:11:59.307 "nvme_iov_md": false 00:11:59.307 }, 00:11:59.307 "memory_domains": [ 00:11:59.307 { 00:11:59.307 "dma_device_id": "system", 00:11:59.307 "dma_device_type": 1 00:11:59.307 }, 00:11:59.307 { 00:11:59.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.307 "dma_device_type": 2 00:11:59.307 } 00:11:59.307 ], 00:11:59.307 "driver_specific": {} 00:11:59.307 } 00:11:59.307 ] 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:59.307 
14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.307 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.567 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.567 "name": "Existed_Raid", 00:11:59.567 "uuid": "5ef55343-a2a2-423e-ba74-56879bf2b7c3", 00:11:59.567 "strip_size_kb": 64, 00:11:59.567 "state": "online", 00:11:59.567 "raid_level": "raid0", 00:11:59.567 "superblock": false, 00:11:59.567 "num_base_bdevs": 4, 00:11:59.567 "num_base_bdevs_discovered": 4, 00:11:59.567 
"num_base_bdevs_operational": 4, 00:11:59.567 "base_bdevs_list": [ 00:11:59.567 { 00:11:59.567 "name": "NewBaseBdev", 00:11:59.567 "uuid": "e0b8ae5d-a5cf-4ded-9d85-0297197fd2db", 00:11:59.567 "is_configured": true, 00:11:59.567 "data_offset": 0, 00:11:59.567 "data_size": 65536 00:11:59.567 }, 00:11:59.567 { 00:11:59.567 "name": "BaseBdev2", 00:11:59.567 "uuid": "2a434cd5-ca20-450b-86f5-709f629c5b24", 00:11:59.567 "is_configured": true, 00:11:59.567 "data_offset": 0, 00:11:59.567 "data_size": 65536 00:11:59.567 }, 00:11:59.567 { 00:11:59.567 "name": "BaseBdev3", 00:11:59.567 "uuid": "d55ad380-795d-4ff7-98eb-8d3c1f0ecb16", 00:11:59.567 "is_configured": true, 00:11:59.567 "data_offset": 0, 00:11:59.567 "data_size": 65536 00:11:59.567 }, 00:11:59.567 { 00:11:59.567 "name": "BaseBdev4", 00:11:59.567 "uuid": "7c537895-95cc-486c-bf3c-f6bbda5dc3ff", 00:11:59.567 "is_configured": true, 00:11:59.567 "data_offset": 0, 00:11:59.567 "data_size": 65536 00:11:59.567 } 00:11:59.567 ] 00:11:59.567 }' 00:11:59.567 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.567 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.827 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:59.827 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:59.827 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:59.827 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.827 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.827 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:59.828 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.828 
14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:59.828 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.828 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.828 [2024-11-20 14:22:38.712709] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.828 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.828 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:59.828 "name": "Existed_Raid", 00:11:59.828 "aliases": [ 00:11:59.828 "5ef55343-a2a2-423e-ba74-56879bf2b7c3" 00:11:59.828 ], 00:11:59.828 "product_name": "Raid Volume", 00:11:59.828 "block_size": 512, 00:11:59.828 "num_blocks": 262144, 00:11:59.828 "uuid": "5ef55343-a2a2-423e-ba74-56879bf2b7c3", 00:11:59.828 "assigned_rate_limits": { 00:11:59.828 "rw_ios_per_sec": 0, 00:11:59.828 "rw_mbytes_per_sec": 0, 00:11:59.828 "r_mbytes_per_sec": 0, 00:11:59.828 "w_mbytes_per_sec": 0 00:11:59.828 }, 00:11:59.828 "claimed": false, 00:11:59.828 "zoned": false, 00:11:59.828 "supported_io_types": { 00:11:59.828 "read": true, 00:11:59.828 "write": true, 00:11:59.828 "unmap": true, 00:11:59.828 "flush": true, 00:11:59.828 "reset": true, 00:11:59.828 "nvme_admin": false, 00:11:59.828 "nvme_io": false, 00:11:59.828 "nvme_io_md": false, 00:11:59.828 "write_zeroes": true, 00:11:59.828 "zcopy": false, 00:11:59.828 "get_zone_info": false, 00:11:59.828 "zone_management": false, 00:11:59.828 "zone_append": false, 00:11:59.828 "compare": false, 00:11:59.828 "compare_and_write": false, 00:11:59.828 "abort": false, 00:11:59.828 "seek_hole": false, 00:11:59.828 "seek_data": false, 00:11:59.828 "copy": false, 00:11:59.828 "nvme_iov_md": false 00:11:59.828 }, 00:11:59.828 "memory_domains": [ 00:11:59.828 { 00:11:59.828 "dma_device_id": 
"system", 00:11:59.828 "dma_device_type": 1 00:11:59.828 }, 00:11:59.828 { 00:11:59.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.828 "dma_device_type": 2 00:11:59.828 }, 00:11:59.828 { 00:11:59.828 "dma_device_id": "system", 00:11:59.828 "dma_device_type": 1 00:11:59.828 }, 00:11:59.828 { 00:11:59.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.828 "dma_device_type": 2 00:11:59.828 }, 00:11:59.828 { 00:11:59.828 "dma_device_id": "system", 00:11:59.828 "dma_device_type": 1 00:11:59.828 }, 00:11:59.828 { 00:11:59.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.828 "dma_device_type": 2 00:11:59.828 }, 00:11:59.828 { 00:11:59.828 "dma_device_id": "system", 00:11:59.828 "dma_device_type": 1 00:11:59.828 }, 00:11:59.828 { 00:11:59.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.828 "dma_device_type": 2 00:11:59.828 } 00:11:59.828 ], 00:11:59.828 "driver_specific": { 00:11:59.828 "raid": { 00:11:59.828 "uuid": "5ef55343-a2a2-423e-ba74-56879bf2b7c3", 00:11:59.828 "strip_size_kb": 64, 00:11:59.828 "state": "online", 00:11:59.828 "raid_level": "raid0", 00:11:59.828 "superblock": false, 00:11:59.828 "num_base_bdevs": 4, 00:11:59.828 "num_base_bdevs_discovered": 4, 00:11:59.828 "num_base_bdevs_operational": 4, 00:11:59.828 "base_bdevs_list": [ 00:11:59.828 { 00:11:59.828 "name": "NewBaseBdev", 00:11:59.828 "uuid": "e0b8ae5d-a5cf-4ded-9d85-0297197fd2db", 00:11:59.828 "is_configured": true, 00:11:59.828 "data_offset": 0, 00:11:59.828 "data_size": 65536 00:11:59.828 }, 00:11:59.828 { 00:11:59.828 "name": "BaseBdev2", 00:11:59.828 "uuid": "2a434cd5-ca20-450b-86f5-709f629c5b24", 00:11:59.828 "is_configured": true, 00:11:59.828 "data_offset": 0, 00:11:59.828 "data_size": 65536 00:11:59.828 }, 00:11:59.828 { 00:11:59.828 "name": "BaseBdev3", 00:11:59.828 "uuid": "d55ad380-795d-4ff7-98eb-8d3c1f0ecb16", 00:11:59.828 "is_configured": true, 00:11:59.828 "data_offset": 0, 00:11:59.828 "data_size": 65536 00:11:59.828 }, 00:11:59.828 { 00:11:59.828 "name": 
"BaseBdev4", 00:11:59.828 "uuid": "7c537895-95cc-486c-bf3c-f6bbda5dc3ff", 00:11:59.828 "is_configured": true, 00:11:59.828 "data_offset": 0, 00:11:59.828 "data_size": 65536 00:11:59.828 } 00:11:59.828 ] 00:11:59.828 } 00:11:59.828 } 00:11:59.828 }' 00:11:59.828 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:00.087 BaseBdev2 00:12:00.087 BaseBdev3 00:12:00.087 BaseBdev4' 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.087 14:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.087 14:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.087 14:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.087 14:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.087 14:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:00.087 14:22:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.087 14:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.087 14:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.087 14:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.087 14:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.087 14:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.087 14:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:00.087 14:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.087 14:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.349 [2024-11-20 14:22:39.068381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:00.349 [2024-11-20 14:22:39.068423] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.349 [2024-11-20 14:22:39.068542] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.349 [2024-11-20 14:22:39.068634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.349 [2024-11-20 14:22:39.068651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:00.349 14:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.349 14:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69470 00:12:00.350 14:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69470 ']' 00:12:00.350 14:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69470 00:12:00.350 14:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:00.350 14:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.350 14:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69470 00:12:00.350 14:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.350 14:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.350 killing process with pid 69470 00:12:00.350 14:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69470' 00:12:00.350 14:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69470 00:12:00.350 [2024-11-20 14:22:39.100769] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:00.350 14:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69470 00:12:00.610 [2024-11-20 14:22:39.463470] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:01.545 14:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:01.545 00:12:01.545 real 0m12.548s 00:12:01.545 user 0m20.760s 00:12:01.545 sys 0m1.711s 00:12:01.545 14:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.545 14:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.545 ************************************ 00:12:01.545 END TEST raid_state_function_test 00:12:01.545 ************************************ 00:12:01.805 14:22:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:12:01.805 14:22:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:01.805 14:22:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.805 14:22:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:01.805 ************************************ 00:12:01.805 START TEST raid_state_function_test_sb 00:12:01.805 ************************************ 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:01.805 14:22:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70152 00:12:01.805 Process raid pid: 70152 00:12:01.805 14:22:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70152' 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70152 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70152 ']' 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.805 14:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.805 [2024-11-20 14:22:40.655240] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:12:01.805 [2024-11-20 14:22:40.655405] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.064 [2024-11-20 14:22:40.832679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.064 [2024-11-20 14:22:40.963465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.324 [2024-11-20 14:22:41.172063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.324 [2024-11-20 14:22:41.172124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.890 14:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.890 14:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:02.890 14:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:02.890 14:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.890 14:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.890 [2024-11-20 14:22:41.663617] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:02.890 [2024-11-20 14:22:41.663684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:02.890 [2024-11-20 14:22:41.663702] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.890 [2024-11-20 14:22:41.663718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.890 [2024-11-20 14:22:41.663728] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:12:02.890 [2024-11-20 14:22:41.663743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.890 [2024-11-20 14:22:41.663752] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:02.890 [2024-11-20 14:22:41.663767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:02.890 14:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.890 14:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:02.890 14:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.890 14:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.890 14:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:02.891 14:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.891 14:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.891 14:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.891 14:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.891 14:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.891 14:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.891 14:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.891 14:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.891 14:22:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.891 14:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.891 14:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.891 14:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.891 "name": "Existed_Raid", 00:12:02.891 "uuid": "7c962c06-c5b3-4c3d-b881-d07a2b86a50e", 00:12:02.891 "strip_size_kb": 64, 00:12:02.891 "state": "configuring", 00:12:02.891 "raid_level": "raid0", 00:12:02.891 "superblock": true, 00:12:02.891 "num_base_bdevs": 4, 00:12:02.891 "num_base_bdevs_discovered": 0, 00:12:02.891 "num_base_bdevs_operational": 4, 00:12:02.891 "base_bdevs_list": [ 00:12:02.891 { 00:12:02.891 "name": "BaseBdev1", 00:12:02.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.891 "is_configured": false, 00:12:02.891 "data_offset": 0, 00:12:02.891 "data_size": 0 00:12:02.891 }, 00:12:02.891 { 00:12:02.891 "name": "BaseBdev2", 00:12:02.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.891 "is_configured": false, 00:12:02.891 "data_offset": 0, 00:12:02.891 "data_size": 0 00:12:02.891 }, 00:12:02.891 { 00:12:02.891 "name": "BaseBdev3", 00:12:02.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.891 "is_configured": false, 00:12:02.891 "data_offset": 0, 00:12:02.891 "data_size": 0 00:12:02.891 }, 00:12:02.891 { 00:12:02.891 "name": "BaseBdev4", 00:12:02.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.891 "is_configured": false, 00:12:02.891 "data_offset": 0, 00:12:02.891 "data_size": 0 00:12:02.891 } 00:12:02.891 ] 00:12:02.891 }' 00:12:02.891 14:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.891 14:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.457 14:22:42 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:03.457 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.457 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.457 [2024-11-20 14:22:42.179673] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.457 [2024-11-20 14:22:42.179724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.458 [2024-11-20 14:22:42.187682] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:03.458 [2024-11-20 14:22:42.187731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:03.458 [2024-11-20 14:22:42.187746] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.458 [2024-11-20 14:22:42.187761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.458 [2024-11-20 14:22:42.187771] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:03.458 [2024-11-20 14:22:42.187785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:03.458 [2024-11-20 14:22:42.187794] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:12:03.458 [2024-11-20 14:22:42.187808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.458 [2024-11-20 14:22:42.232353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.458 BaseBdev1 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.458 [ 00:12:03.458 { 00:12:03.458 "name": "BaseBdev1", 00:12:03.458 "aliases": [ 00:12:03.458 "54460ab1-f3f5-4213-a346-6270608be3fc" 00:12:03.458 ], 00:12:03.458 "product_name": "Malloc disk", 00:12:03.458 "block_size": 512, 00:12:03.458 "num_blocks": 65536, 00:12:03.458 "uuid": "54460ab1-f3f5-4213-a346-6270608be3fc", 00:12:03.458 "assigned_rate_limits": { 00:12:03.458 "rw_ios_per_sec": 0, 00:12:03.458 "rw_mbytes_per_sec": 0, 00:12:03.458 "r_mbytes_per_sec": 0, 00:12:03.458 "w_mbytes_per_sec": 0 00:12:03.458 }, 00:12:03.458 "claimed": true, 00:12:03.458 "claim_type": "exclusive_write", 00:12:03.458 "zoned": false, 00:12:03.458 "supported_io_types": { 00:12:03.458 "read": true, 00:12:03.458 "write": true, 00:12:03.458 "unmap": true, 00:12:03.458 "flush": true, 00:12:03.458 "reset": true, 00:12:03.458 "nvme_admin": false, 00:12:03.458 "nvme_io": false, 00:12:03.458 "nvme_io_md": false, 00:12:03.458 "write_zeroes": true, 00:12:03.458 "zcopy": true, 00:12:03.458 "get_zone_info": false, 00:12:03.458 "zone_management": false, 00:12:03.458 "zone_append": false, 00:12:03.458 "compare": false, 00:12:03.458 "compare_and_write": false, 00:12:03.458 "abort": true, 00:12:03.458 "seek_hole": false, 00:12:03.458 "seek_data": false, 00:12:03.458 "copy": true, 00:12:03.458 "nvme_iov_md": false 00:12:03.458 }, 00:12:03.458 "memory_domains": [ 00:12:03.458 { 00:12:03.458 "dma_device_id": "system", 00:12:03.458 "dma_device_type": 1 00:12:03.458 }, 00:12:03.458 { 00:12:03.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.458 "dma_device_type": 2 00:12:03.458 } 00:12:03.458 ], 00:12:03.458 "driver_specific": {} 
00:12:03.458 } 00:12:03.458 ] 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.458 "name": "Existed_Raid", 00:12:03.458 "uuid": "75af7acb-d399-4745-a3f7-4a36de914829", 00:12:03.458 "strip_size_kb": 64, 00:12:03.458 "state": "configuring", 00:12:03.458 "raid_level": "raid0", 00:12:03.458 "superblock": true, 00:12:03.458 "num_base_bdevs": 4, 00:12:03.458 "num_base_bdevs_discovered": 1, 00:12:03.458 "num_base_bdevs_operational": 4, 00:12:03.458 "base_bdevs_list": [ 00:12:03.458 { 00:12:03.458 "name": "BaseBdev1", 00:12:03.458 "uuid": "54460ab1-f3f5-4213-a346-6270608be3fc", 00:12:03.458 "is_configured": true, 00:12:03.458 "data_offset": 2048, 00:12:03.458 "data_size": 63488 00:12:03.458 }, 00:12:03.458 { 00:12:03.458 "name": "BaseBdev2", 00:12:03.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.458 "is_configured": false, 00:12:03.458 "data_offset": 0, 00:12:03.458 "data_size": 0 00:12:03.458 }, 00:12:03.458 { 00:12:03.458 "name": "BaseBdev3", 00:12:03.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.458 "is_configured": false, 00:12:03.458 "data_offset": 0, 00:12:03.458 "data_size": 0 00:12:03.458 }, 00:12:03.458 { 00:12:03.458 "name": "BaseBdev4", 00:12:03.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.458 "is_configured": false, 00:12:03.458 "data_offset": 0, 00:12:03.458 "data_size": 0 00:12:03.458 } 00:12:03.458 ] 00:12:03.458 }' 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.458 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:04.025 [2024-11-20 14:22:42.768539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:04.025 [2024-11-20 14:22:42.768607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.025 [2024-11-20 14:22:42.776601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:04.025 [2024-11-20 14:22:42.778982] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:04.025 [2024-11-20 14:22:42.779048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:04.025 [2024-11-20 14:22:42.779074] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:04.025 [2024-11-20 14:22:42.779092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:04.025 [2024-11-20 14:22:42.779103] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:04.025 [2024-11-20 14:22:42.779117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:04.025 14:22:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.025 "name": 
"Existed_Raid", 00:12:04.025 "uuid": "2fb891b8-47a1-4bcf-ac8d-6c00c11a8f17", 00:12:04.025 "strip_size_kb": 64, 00:12:04.025 "state": "configuring", 00:12:04.025 "raid_level": "raid0", 00:12:04.025 "superblock": true, 00:12:04.025 "num_base_bdevs": 4, 00:12:04.025 "num_base_bdevs_discovered": 1, 00:12:04.025 "num_base_bdevs_operational": 4, 00:12:04.025 "base_bdevs_list": [ 00:12:04.025 { 00:12:04.025 "name": "BaseBdev1", 00:12:04.025 "uuid": "54460ab1-f3f5-4213-a346-6270608be3fc", 00:12:04.025 "is_configured": true, 00:12:04.025 "data_offset": 2048, 00:12:04.025 "data_size": 63488 00:12:04.025 }, 00:12:04.025 { 00:12:04.025 "name": "BaseBdev2", 00:12:04.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.025 "is_configured": false, 00:12:04.025 "data_offset": 0, 00:12:04.025 "data_size": 0 00:12:04.025 }, 00:12:04.025 { 00:12:04.025 "name": "BaseBdev3", 00:12:04.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.025 "is_configured": false, 00:12:04.025 "data_offset": 0, 00:12:04.025 "data_size": 0 00:12:04.025 }, 00:12:04.025 { 00:12:04.025 "name": "BaseBdev4", 00:12:04.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.025 "is_configured": false, 00:12:04.025 "data_offset": 0, 00:12:04.025 "data_size": 0 00:12:04.025 } 00:12:04.025 ] 00:12:04.025 }' 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.025 14:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.592 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:04.592 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.592 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.592 [2024-11-20 14:22:43.359271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:12:04.592 BaseBdev2 00:12:04.592 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.592 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:04.592 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:04.592 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.592 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:04.592 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.592 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.592 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.592 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.592 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.592 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.592 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:04.592 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.592 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.592 [ 00:12:04.592 { 00:12:04.593 "name": "BaseBdev2", 00:12:04.593 "aliases": [ 00:12:04.593 "b0837b0c-59b9-4d82-b8a8-b9ba90d60a60" 00:12:04.593 ], 00:12:04.593 "product_name": "Malloc disk", 00:12:04.593 "block_size": 512, 00:12:04.593 "num_blocks": 65536, 00:12:04.593 "uuid": "b0837b0c-59b9-4d82-b8a8-b9ba90d60a60", 00:12:04.593 
"assigned_rate_limits": { 00:12:04.593 "rw_ios_per_sec": 0, 00:12:04.593 "rw_mbytes_per_sec": 0, 00:12:04.593 "r_mbytes_per_sec": 0, 00:12:04.593 "w_mbytes_per_sec": 0 00:12:04.593 }, 00:12:04.593 "claimed": true, 00:12:04.593 "claim_type": "exclusive_write", 00:12:04.593 "zoned": false, 00:12:04.593 "supported_io_types": { 00:12:04.593 "read": true, 00:12:04.593 "write": true, 00:12:04.593 "unmap": true, 00:12:04.593 "flush": true, 00:12:04.593 "reset": true, 00:12:04.593 "nvme_admin": false, 00:12:04.593 "nvme_io": false, 00:12:04.593 "nvme_io_md": false, 00:12:04.593 "write_zeroes": true, 00:12:04.593 "zcopy": true, 00:12:04.593 "get_zone_info": false, 00:12:04.593 "zone_management": false, 00:12:04.593 "zone_append": false, 00:12:04.593 "compare": false, 00:12:04.593 "compare_and_write": false, 00:12:04.593 "abort": true, 00:12:04.593 "seek_hole": false, 00:12:04.593 "seek_data": false, 00:12:04.593 "copy": true, 00:12:04.593 "nvme_iov_md": false 00:12:04.593 }, 00:12:04.593 "memory_domains": [ 00:12:04.593 { 00:12:04.593 "dma_device_id": "system", 00:12:04.593 "dma_device_type": 1 00:12:04.593 }, 00:12:04.593 { 00:12:04.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.593 "dma_device_type": 2 00:12:04.593 } 00:12:04.593 ], 00:12:04.593 "driver_specific": {} 00:12:04.593 } 00:12:04.593 ] 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.593 "name": "Existed_Raid", 00:12:04.593 "uuid": "2fb891b8-47a1-4bcf-ac8d-6c00c11a8f17", 00:12:04.593 "strip_size_kb": 64, 00:12:04.593 "state": "configuring", 00:12:04.593 "raid_level": "raid0", 00:12:04.593 "superblock": true, 00:12:04.593 "num_base_bdevs": 4, 00:12:04.593 "num_base_bdevs_discovered": 2, 00:12:04.593 "num_base_bdevs_operational": 4, 
00:12:04.593 "base_bdevs_list": [ 00:12:04.593 { 00:12:04.593 "name": "BaseBdev1", 00:12:04.593 "uuid": "54460ab1-f3f5-4213-a346-6270608be3fc", 00:12:04.593 "is_configured": true, 00:12:04.593 "data_offset": 2048, 00:12:04.593 "data_size": 63488 00:12:04.593 }, 00:12:04.593 { 00:12:04.593 "name": "BaseBdev2", 00:12:04.593 "uuid": "b0837b0c-59b9-4d82-b8a8-b9ba90d60a60", 00:12:04.593 "is_configured": true, 00:12:04.593 "data_offset": 2048, 00:12:04.593 "data_size": 63488 00:12:04.593 }, 00:12:04.593 { 00:12:04.593 "name": "BaseBdev3", 00:12:04.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.593 "is_configured": false, 00:12:04.593 "data_offset": 0, 00:12:04.593 "data_size": 0 00:12:04.593 }, 00:12:04.593 { 00:12:04.593 "name": "BaseBdev4", 00:12:04.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.593 "is_configured": false, 00:12:04.593 "data_offset": 0, 00:12:04.593 "data_size": 0 00:12:04.593 } 00:12:04.593 ] 00:12:04.593 }' 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.593 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.160 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:05.160 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.160 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.160 [2024-11-20 14:22:43.956377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.160 BaseBdev3 00:12:05.160 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.160 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.161 [ 00:12:05.161 { 00:12:05.161 "name": "BaseBdev3", 00:12:05.161 "aliases": [ 00:12:05.161 "cc9787a9-f58d-42e7-b0cc-c44eec7d135c" 00:12:05.161 ], 00:12:05.161 "product_name": "Malloc disk", 00:12:05.161 "block_size": 512, 00:12:05.161 "num_blocks": 65536, 00:12:05.161 "uuid": "cc9787a9-f58d-42e7-b0cc-c44eec7d135c", 00:12:05.161 "assigned_rate_limits": { 00:12:05.161 "rw_ios_per_sec": 0, 00:12:05.161 "rw_mbytes_per_sec": 0, 00:12:05.161 "r_mbytes_per_sec": 0, 00:12:05.161 "w_mbytes_per_sec": 0 00:12:05.161 }, 00:12:05.161 "claimed": true, 00:12:05.161 "claim_type": "exclusive_write", 00:12:05.161 "zoned": false, 00:12:05.161 "supported_io_types": { 00:12:05.161 "read": true, 00:12:05.161 
"write": true, 00:12:05.161 "unmap": true, 00:12:05.161 "flush": true, 00:12:05.161 "reset": true, 00:12:05.161 "nvme_admin": false, 00:12:05.161 "nvme_io": false, 00:12:05.161 "nvme_io_md": false, 00:12:05.161 "write_zeroes": true, 00:12:05.161 "zcopy": true, 00:12:05.161 "get_zone_info": false, 00:12:05.161 "zone_management": false, 00:12:05.161 "zone_append": false, 00:12:05.161 "compare": false, 00:12:05.161 "compare_and_write": false, 00:12:05.161 "abort": true, 00:12:05.161 "seek_hole": false, 00:12:05.161 "seek_data": false, 00:12:05.161 "copy": true, 00:12:05.161 "nvme_iov_md": false 00:12:05.161 }, 00:12:05.161 "memory_domains": [ 00:12:05.161 { 00:12:05.161 "dma_device_id": "system", 00:12:05.161 "dma_device_type": 1 00:12:05.161 }, 00:12:05.161 { 00:12:05.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.161 "dma_device_type": 2 00:12:05.161 } 00:12:05.161 ], 00:12:05.161 "driver_specific": {} 00:12:05.161 } 00:12:05.161 ] 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.161 14:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.161 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.161 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.161 "name": "Existed_Raid", 00:12:05.161 "uuid": "2fb891b8-47a1-4bcf-ac8d-6c00c11a8f17", 00:12:05.161 "strip_size_kb": 64, 00:12:05.161 "state": "configuring", 00:12:05.161 "raid_level": "raid0", 00:12:05.161 "superblock": true, 00:12:05.161 "num_base_bdevs": 4, 00:12:05.161 "num_base_bdevs_discovered": 3, 00:12:05.161 "num_base_bdevs_operational": 4, 00:12:05.161 "base_bdevs_list": [ 00:12:05.161 { 00:12:05.161 "name": "BaseBdev1", 00:12:05.161 "uuid": "54460ab1-f3f5-4213-a346-6270608be3fc", 00:12:05.161 "is_configured": true, 00:12:05.161 "data_offset": 2048, 00:12:05.161 "data_size": 63488 00:12:05.161 }, 00:12:05.161 { 00:12:05.161 "name": "BaseBdev2", 00:12:05.161 "uuid": 
"b0837b0c-59b9-4d82-b8a8-b9ba90d60a60", 00:12:05.161 "is_configured": true, 00:12:05.161 "data_offset": 2048, 00:12:05.161 "data_size": 63488 00:12:05.161 }, 00:12:05.161 { 00:12:05.161 "name": "BaseBdev3", 00:12:05.161 "uuid": "cc9787a9-f58d-42e7-b0cc-c44eec7d135c", 00:12:05.161 "is_configured": true, 00:12:05.161 "data_offset": 2048, 00:12:05.161 "data_size": 63488 00:12:05.161 }, 00:12:05.161 { 00:12:05.161 "name": "BaseBdev4", 00:12:05.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.161 "is_configured": false, 00:12:05.161 "data_offset": 0, 00:12:05.161 "data_size": 0 00:12:05.161 } 00:12:05.161 ] 00:12:05.161 }' 00:12:05.161 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.161 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.729 [2024-11-20 14:22:44.538981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:05.729 [2024-11-20 14:22:44.539316] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:05.729 [2024-11-20 14:22:44.539337] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:05.729 BaseBdev4 00:12:05.729 [2024-11-20 14:22:44.539674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:05.729 [2024-11-20 14:22:44.539865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:05.729 [2024-11-20 14:22:44.539887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:05.729 [2024-11-20 14:22:44.540083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.729 [ 00:12:05.729 { 00:12:05.729 "name": "BaseBdev4", 00:12:05.729 "aliases": [ 00:12:05.729 "fc3fb1bf-e326-40c1-908b-2ca016f4dc08" 00:12:05.729 ], 00:12:05.729 "product_name": "Malloc disk", 00:12:05.729 "block_size": 512, 00:12:05.729 
"num_blocks": 65536, 00:12:05.729 "uuid": "fc3fb1bf-e326-40c1-908b-2ca016f4dc08", 00:12:05.729 "assigned_rate_limits": { 00:12:05.729 "rw_ios_per_sec": 0, 00:12:05.729 "rw_mbytes_per_sec": 0, 00:12:05.729 "r_mbytes_per_sec": 0, 00:12:05.729 "w_mbytes_per_sec": 0 00:12:05.729 }, 00:12:05.729 "claimed": true, 00:12:05.729 "claim_type": "exclusive_write", 00:12:05.729 "zoned": false, 00:12:05.729 "supported_io_types": { 00:12:05.729 "read": true, 00:12:05.729 "write": true, 00:12:05.729 "unmap": true, 00:12:05.729 "flush": true, 00:12:05.729 "reset": true, 00:12:05.729 "nvme_admin": false, 00:12:05.729 "nvme_io": false, 00:12:05.729 "nvme_io_md": false, 00:12:05.729 "write_zeroes": true, 00:12:05.729 "zcopy": true, 00:12:05.729 "get_zone_info": false, 00:12:05.729 "zone_management": false, 00:12:05.729 "zone_append": false, 00:12:05.729 "compare": false, 00:12:05.729 "compare_and_write": false, 00:12:05.729 "abort": true, 00:12:05.729 "seek_hole": false, 00:12:05.729 "seek_data": false, 00:12:05.729 "copy": true, 00:12:05.729 "nvme_iov_md": false 00:12:05.729 }, 00:12:05.729 "memory_domains": [ 00:12:05.729 { 00:12:05.729 "dma_device_id": "system", 00:12:05.729 "dma_device_type": 1 00:12:05.729 }, 00:12:05.729 { 00:12:05.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.729 "dma_device_type": 2 00:12:05.729 } 00:12:05.729 ], 00:12:05.729 "driver_specific": {} 00:12:05.729 } 00:12:05.729 ] 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.729 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.730 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.730 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.730 "name": "Existed_Raid", 00:12:05.730 "uuid": "2fb891b8-47a1-4bcf-ac8d-6c00c11a8f17", 00:12:05.730 "strip_size_kb": 64, 00:12:05.730 "state": "online", 00:12:05.730 "raid_level": "raid0", 00:12:05.730 "superblock": true, 00:12:05.730 "num_base_bdevs": 4, 
00:12:05.730 "num_base_bdevs_discovered": 4, 00:12:05.730 "num_base_bdevs_operational": 4, 00:12:05.730 "base_bdevs_list": [ 00:12:05.730 { 00:12:05.730 "name": "BaseBdev1", 00:12:05.730 "uuid": "54460ab1-f3f5-4213-a346-6270608be3fc", 00:12:05.730 "is_configured": true, 00:12:05.730 "data_offset": 2048, 00:12:05.730 "data_size": 63488 00:12:05.730 }, 00:12:05.730 { 00:12:05.730 "name": "BaseBdev2", 00:12:05.730 "uuid": "b0837b0c-59b9-4d82-b8a8-b9ba90d60a60", 00:12:05.730 "is_configured": true, 00:12:05.730 "data_offset": 2048, 00:12:05.730 "data_size": 63488 00:12:05.730 }, 00:12:05.730 { 00:12:05.730 "name": "BaseBdev3", 00:12:05.730 "uuid": "cc9787a9-f58d-42e7-b0cc-c44eec7d135c", 00:12:05.730 "is_configured": true, 00:12:05.730 "data_offset": 2048, 00:12:05.730 "data_size": 63488 00:12:05.730 }, 00:12:05.730 { 00:12:05.730 "name": "BaseBdev4", 00:12:05.730 "uuid": "fc3fb1bf-e326-40c1-908b-2ca016f4dc08", 00:12:05.730 "is_configured": true, 00:12:05.730 "data_offset": 2048, 00:12:05.730 "data_size": 63488 00:12:05.730 } 00:12:05.730 ] 00:12:05.730 }' 00:12:05.730 14:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.730 14:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.297 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:06.297 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:06.297 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:06.297 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:06.297 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:06.297 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:06.297 
14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:06.297 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.297 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.297 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:06.297 [2024-11-20 14:22:45.127702] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.297 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.297 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:06.297 "name": "Existed_Raid", 00:12:06.297 "aliases": [ 00:12:06.297 "2fb891b8-47a1-4bcf-ac8d-6c00c11a8f17" 00:12:06.297 ], 00:12:06.297 "product_name": "Raid Volume", 00:12:06.297 "block_size": 512, 00:12:06.297 "num_blocks": 253952, 00:12:06.297 "uuid": "2fb891b8-47a1-4bcf-ac8d-6c00c11a8f17", 00:12:06.297 "assigned_rate_limits": { 00:12:06.297 "rw_ios_per_sec": 0, 00:12:06.297 "rw_mbytes_per_sec": 0, 00:12:06.297 "r_mbytes_per_sec": 0, 00:12:06.297 "w_mbytes_per_sec": 0 00:12:06.297 }, 00:12:06.297 "claimed": false, 00:12:06.297 "zoned": false, 00:12:06.297 "supported_io_types": { 00:12:06.297 "read": true, 00:12:06.297 "write": true, 00:12:06.297 "unmap": true, 00:12:06.297 "flush": true, 00:12:06.297 "reset": true, 00:12:06.297 "nvme_admin": false, 00:12:06.297 "nvme_io": false, 00:12:06.297 "nvme_io_md": false, 00:12:06.297 "write_zeroes": true, 00:12:06.297 "zcopy": false, 00:12:06.297 "get_zone_info": false, 00:12:06.297 "zone_management": false, 00:12:06.297 "zone_append": false, 00:12:06.297 "compare": false, 00:12:06.297 "compare_and_write": false, 00:12:06.297 "abort": false, 00:12:06.297 "seek_hole": false, 00:12:06.297 "seek_data": false, 00:12:06.297 "copy": false, 00:12:06.297 
"nvme_iov_md": false 00:12:06.297 }, 00:12:06.297 "memory_domains": [ 00:12:06.297 { 00:12:06.297 "dma_device_id": "system", 00:12:06.297 "dma_device_type": 1 00:12:06.297 }, 00:12:06.297 { 00:12:06.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.297 "dma_device_type": 2 00:12:06.297 }, 00:12:06.297 { 00:12:06.297 "dma_device_id": "system", 00:12:06.297 "dma_device_type": 1 00:12:06.297 }, 00:12:06.297 { 00:12:06.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.297 "dma_device_type": 2 00:12:06.297 }, 00:12:06.297 { 00:12:06.297 "dma_device_id": "system", 00:12:06.297 "dma_device_type": 1 00:12:06.297 }, 00:12:06.297 { 00:12:06.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.297 "dma_device_type": 2 00:12:06.297 }, 00:12:06.297 { 00:12:06.297 "dma_device_id": "system", 00:12:06.297 "dma_device_type": 1 00:12:06.297 }, 00:12:06.298 { 00:12:06.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.298 "dma_device_type": 2 00:12:06.298 } 00:12:06.298 ], 00:12:06.298 "driver_specific": { 00:12:06.298 "raid": { 00:12:06.298 "uuid": "2fb891b8-47a1-4bcf-ac8d-6c00c11a8f17", 00:12:06.298 "strip_size_kb": 64, 00:12:06.298 "state": "online", 00:12:06.298 "raid_level": "raid0", 00:12:06.298 "superblock": true, 00:12:06.298 "num_base_bdevs": 4, 00:12:06.298 "num_base_bdevs_discovered": 4, 00:12:06.298 "num_base_bdevs_operational": 4, 00:12:06.298 "base_bdevs_list": [ 00:12:06.298 { 00:12:06.298 "name": "BaseBdev1", 00:12:06.298 "uuid": "54460ab1-f3f5-4213-a346-6270608be3fc", 00:12:06.298 "is_configured": true, 00:12:06.298 "data_offset": 2048, 00:12:06.298 "data_size": 63488 00:12:06.298 }, 00:12:06.298 { 00:12:06.298 "name": "BaseBdev2", 00:12:06.298 "uuid": "b0837b0c-59b9-4d82-b8a8-b9ba90d60a60", 00:12:06.298 "is_configured": true, 00:12:06.298 "data_offset": 2048, 00:12:06.298 "data_size": 63488 00:12:06.298 }, 00:12:06.298 { 00:12:06.298 "name": "BaseBdev3", 00:12:06.298 "uuid": "cc9787a9-f58d-42e7-b0cc-c44eec7d135c", 00:12:06.298 "is_configured": true, 
00:12:06.298 "data_offset": 2048, 00:12:06.298 "data_size": 63488 00:12:06.298 }, 00:12:06.298 { 00:12:06.298 "name": "BaseBdev4", 00:12:06.298 "uuid": "fc3fb1bf-e326-40c1-908b-2ca016f4dc08", 00:12:06.298 "is_configured": true, 00:12:06.298 "data_offset": 2048, 00:12:06.298 "data_size": 63488 00:12:06.298 } 00:12:06.298 ] 00:12:06.298 } 00:12:06.298 } 00:12:06.298 }' 00:12:06.298 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:06.298 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:06.298 BaseBdev2 00:12:06.298 BaseBdev3 00:12:06.298 BaseBdev4' 00:12:06.298 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.298 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:06.298 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.298 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:06.298 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.298 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.298 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.558 14:22:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.558 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.558 [2024-11-20 14:22:45.479354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:06.558 [2024-11-20 14:22:45.479395] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.558 [2024-11-20 14:22:45.479471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.817 "name": "Existed_Raid", 00:12:06.817 "uuid": "2fb891b8-47a1-4bcf-ac8d-6c00c11a8f17", 00:12:06.817 "strip_size_kb": 64, 00:12:06.817 "state": "offline", 00:12:06.817 "raid_level": "raid0", 00:12:06.817 "superblock": true, 00:12:06.817 "num_base_bdevs": 4, 00:12:06.817 "num_base_bdevs_discovered": 3, 00:12:06.817 "num_base_bdevs_operational": 3, 00:12:06.817 "base_bdevs_list": [ 00:12:06.817 { 00:12:06.817 "name": null, 00:12:06.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.817 "is_configured": false, 00:12:06.817 "data_offset": 0, 00:12:06.817 "data_size": 63488 00:12:06.817 }, 00:12:06.817 { 00:12:06.817 "name": "BaseBdev2", 00:12:06.817 "uuid": "b0837b0c-59b9-4d82-b8a8-b9ba90d60a60", 00:12:06.817 "is_configured": true, 00:12:06.817 "data_offset": 2048, 00:12:06.817 "data_size": 63488 00:12:06.817 }, 00:12:06.817 { 00:12:06.817 "name": "BaseBdev3", 00:12:06.817 "uuid": "cc9787a9-f58d-42e7-b0cc-c44eec7d135c", 00:12:06.817 "is_configured": true, 00:12:06.817 "data_offset": 2048, 00:12:06.817 "data_size": 63488 00:12:06.817 }, 00:12:06.817 { 00:12:06.817 "name": "BaseBdev4", 00:12:06.817 "uuid": "fc3fb1bf-e326-40c1-908b-2ca016f4dc08", 00:12:06.817 "is_configured": true, 00:12:06.817 "data_offset": 2048, 00:12:06.817 "data_size": 63488 00:12:06.817 } 00:12:06.817 ] 00:12:06.817 }' 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.817 14:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.403 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:07.403 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.404 
14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.404 [2024-11-20 14:22:46.134286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.404 [2024-11-20 14:22:46.277275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:07.404 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:07.663 14:22:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.663 [2024-11-20 14:22:46.419665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:07.663 [2024-11-20 14:22:46.419726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.663 BaseBdev2 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.663 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.663 [ 00:12:07.663 { 00:12:07.663 "name": "BaseBdev2", 00:12:07.663 "aliases": [ 00:12:07.663 
"fd7bcf0b-bab7-440f-a7da-36a65c0105bb" 00:12:07.663 ], 00:12:07.663 "product_name": "Malloc disk", 00:12:07.663 "block_size": 512, 00:12:07.663 "num_blocks": 65536, 00:12:07.663 "uuid": "fd7bcf0b-bab7-440f-a7da-36a65c0105bb", 00:12:07.663 "assigned_rate_limits": { 00:12:07.663 "rw_ios_per_sec": 0, 00:12:07.663 "rw_mbytes_per_sec": 0, 00:12:07.663 "r_mbytes_per_sec": 0, 00:12:07.663 "w_mbytes_per_sec": 0 00:12:07.663 }, 00:12:07.663 "claimed": false, 00:12:07.663 "zoned": false, 00:12:07.663 "supported_io_types": { 00:12:07.663 "read": true, 00:12:07.663 "write": true, 00:12:07.663 "unmap": true, 00:12:07.663 "flush": true, 00:12:07.663 "reset": true, 00:12:07.663 "nvme_admin": false, 00:12:07.663 "nvme_io": false, 00:12:07.663 "nvme_io_md": false, 00:12:07.663 "write_zeroes": true, 00:12:07.663 "zcopy": true, 00:12:07.663 "get_zone_info": false, 00:12:07.663 "zone_management": false, 00:12:07.663 "zone_append": false, 00:12:07.663 "compare": false, 00:12:07.663 "compare_and_write": false, 00:12:07.663 "abort": true, 00:12:07.663 "seek_hole": false, 00:12:07.663 "seek_data": false, 00:12:07.663 "copy": true, 00:12:07.664 "nvme_iov_md": false 00:12:07.664 }, 00:12:07.664 "memory_domains": [ 00:12:07.664 { 00:12:07.664 "dma_device_id": "system", 00:12:07.664 "dma_device_type": 1 00:12:07.664 }, 00:12:07.664 { 00:12:07.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.664 "dma_device_type": 2 00:12:07.664 } 00:12:07.664 ], 00:12:07.664 "driver_specific": {} 00:12:07.664 } 00:12:07.664 ] 00:12:07.664 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.664 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:07.664 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:07.664 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.664 14:22:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:07.664 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.664 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.923 BaseBdev3 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.923 [ 00:12:07.923 { 
00:12:07.923 "name": "BaseBdev3", 00:12:07.923 "aliases": [ 00:12:07.923 "43a050cf-3cb6-4f76-8177-e9475bf1f90d" 00:12:07.923 ], 00:12:07.923 "product_name": "Malloc disk", 00:12:07.923 "block_size": 512, 00:12:07.923 "num_blocks": 65536, 00:12:07.923 "uuid": "43a050cf-3cb6-4f76-8177-e9475bf1f90d", 00:12:07.923 "assigned_rate_limits": { 00:12:07.923 "rw_ios_per_sec": 0, 00:12:07.923 "rw_mbytes_per_sec": 0, 00:12:07.923 "r_mbytes_per_sec": 0, 00:12:07.923 "w_mbytes_per_sec": 0 00:12:07.923 }, 00:12:07.923 "claimed": false, 00:12:07.923 "zoned": false, 00:12:07.923 "supported_io_types": { 00:12:07.923 "read": true, 00:12:07.923 "write": true, 00:12:07.923 "unmap": true, 00:12:07.923 "flush": true, 00:12:07.923 "reset": true, 00:12:07.923 "nvme_admin": false, 00:12:07.923 "nvme_io": false, 00:12:07.923 "nvme_io_md": false, 00:12:07.923 "write_zeroes": true, 00:12:07.923 "zcopy": true, 00:12:07.923 "get_zone_info": false, 00:12:07.923 "zone_management": false, 00:12:07.923 "zone_append": false, 00:12:07.923 "compare": false, 00:12:07.923 "compare_and_write": false, 00:12:07.923 "abort": true, 00:12:07.923 "seek_hole": false, 00:12:07.923 "seek_data": false, 00:12:07.923 "copy": true, 00:12:07.923 "nvme_iov_md": false 00:12:07.923 }, 00:12:07.923 "memory_domains": [ 00:12:07.923 { 00:12:07.923 "dma_device_id": "system", 00:12:07.923 "dma_device_type": 1 00:12:07.923 }, 00:12:07.923 { 00:12:07.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.923 "dma_device_type": 2 00:12:07.923 } 00:12:07.923 ], 00:12:07.923 "driver_specific": {} 00:12:07.923 } 00:12:07.923 ] 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.923 BaseBdev4 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.923 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:07.923 [ 00:12:07.923 { 00:12:07.923 "name": "BaseBdev4", 00:12:07.923 "aliases": [ 00:12:07.923 "a4d02afd-7b1e-4adb-8e20-77be2b52e53e" 00:12:07.923 ], 00:12:07.923 "product_name": "Malloc disk", 00:12:07.923 "block_size": 512, 00:12:07.923 "num_blocks": 65536, 00:12:07.923 "uuid": "a4d02afd-7b1e-4adb-8e20-77be2b52e53e", 00:12:07.923 "assigned_rate_limits": { 00:12:07.923 "rw_ios_per_sec": 0, 00:12:07.923 "rw_mbytes_per_sec": 0, 00:12:07.923 "r_mbytes_per_sec": 0, 00:12:07.923 "w_mbytes_per_sec": 0 00:12:07.923 }, 00:12:07.923 "claimed": false, 00:12:07.923 "zoned": false, 00:12:07.923 "supported_io_types": { 00:12:07.923 "read": true, 00:12:07.923 "write": true, 00:12:07.923 "unmap": true, 00:12:07.923 "flush": true, 00:12:07.924 "reset": true, 00:12:07.924 "nvme_admin": false, 00:12:07.924 "nvme_io": false, 00:12:07.924 "nvme_io_md": false, 00:12:07.924 "write_zeroes": true, 00:12:07.924 "zcopy": true, 00:12:07.924 "get_zone_info": false, 00:12:07.924 "zone_management": false, 00:12:07.924 "zone_append": false, 00:12:07.924 "compare": false, 00:12:07.924 "compare_and_write": false, 00:12:07.924 "abort": true, 00:12:07.924 "seek_hole": false, 00:12:07.924 "seek_data": false, 00:12:07.924 "copy": true, 00:12:07.924 "nvme_iov_md": false 00:12:07.924 }, 00:12:07.924 "memory_domains": [ 00:12:07.924 { 00:12:07.924 "dma_device_id": "system", 00:12:07.924 "dma_device_type": 1 00:12:07.924 }, 00:12:07.924 { 00:12:07.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.924 "dma_device_type": 2 00:12:07.924 } 00:12:07.924 ], 00:12:07.924 "driver_specific": {} 00:12:07.924 } 00:12:07.924 ] 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:07.924 14:22:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.924 [2024-11-20 14:22:46.777927] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:07.924 [2024-11-20 14:22:46.778001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:07.924 [2024-11-20 14:22:46.778036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.924 [2024-11-20 14:22:46.780504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.924 [2024-11-20 14:22:46.780580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.924 "name": "Existed_Raid", 00:12:07.924 "uuid": "049cf621-ea59-466a-a868-1b17065f7d1f", 00:12:07.924 "strip_size_kb": 64, 00:12:07.924 "state": "configuring", 00:12:07.924 "raid_level": "raid0", 00:12:07.924 "superblock": true, 00:12:07.924 "num_base_bdevs": 4, 00:12:07.924 "num_base_bdevs_discovered": 3, 00:12:07.924 "num_base_bdevs_operational": 4, 00:12:07.924 "base_bdevs_list": [ 00:12:07.924 { 00:12:07.924 "name": "BaseBdev1", 00:12:07.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.924 "is_configured": false, 00:12:07.924 "data_offset": 0, 00:12:07.924 "data_size": 0 00:12:07.924 }, 00:12:07.924 { 00:12:07.924 "name": "BaseBdev2", 00:12:07.924 "uuid": "fd7bcf0b-bab7-440f-a7da-36a65c0105bb", 00:12:07.924 "is_configured": true, 00:12:07.924 "data_offset": 2048, 00:12:07.924 "data_size": 63488 
00:12:07.924 }, 00:12:07.924 { 00:12:07.924 "name": "BaseBdev3", 00:12:07.924 "uuid": "43a050cf-3cb6-4f76-8177-e9475bf1f90d", 00:12:07.924 "is_configured": true, 00:12:07.924 "data_offset": 2048, 00:12:07.924 "data_size": 63488 00:12:07.924 }, 00:12:07.924 { 00:12:07.924 "name": "BaseBdev4", 00:12:07.924 "uuid": "a4d02afd-7b1e-4adb-8e20-77be2b52e53e", 00:12:07.924 "is_configured": true, 00:12:07.924 "data_offset": 2048, 00:12:07.924 "data_size": 63488 00:12:07.924 } 00:12:07.924 ] 00:12:07.924 }' 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.924 14:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.492 [2024-11-20 14:22:47.286097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.492 "name": "Existed_Raid", 00:12:08.492 "uuid": "049cf621-ea59-466a-a868-1b17065f7d1f", 00:12:08.492 "strip_size_kb": 64, 00:12:08.492 "state": "configuring", 00:12:08.492 "raid_level": "raid0", 00:12:08.492 "superblock": true, 00:12:08.492 "num_base_bdevs": 4, 00:12:08.492 "num_base_bdevs_discovered": 2, 00:12:08.492 "num_base_bdevs_operational": 4, 00:12:08.492 "base_bdevs_list": [ 00:12:08.492 { 00:12:08.492 "name": "BaseBdev1", 00:12:08.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.492 "is_configured": false, 00:12:08.492 "data_offset": 0, 00:12:08.492 "data_size": 0 00:12:08.492 }, 00:12:08.492 { 00:12:08.492 "name": null, 00:12:08.492 "uuid": "fd7bcf0b-bab7-440f-a7da-36a65c0105bb", 00:12:08.492 "is_configured": false, 00:12:08.492 "data_offset": 0, 00:12:08.492 "data_size": 63488 
00:12:08.492 }, 00:12:08.492 { 00:12:08.492 "name": "BaseBdev3", 00:12:08.492 "uuid": "43a050cf-3cb6-4f76-8177-e9475bf1f90d", 00:12:08.492 "is_configured": true, 00:12:08.492 "data_offset": 2048, 00:12:08.492 "data_size": 63488 00:12:08.492 }, 00:12:08.492 { 00:12:08.492 "name": "BaseBdev4", 00:12:08.492 "uuid": "a4d02afd-7b1e-4adb-8e20-77be2b52e53e", 00:12:08.492 "is_configured": true, 00:12:08.492 "data_offset": 2048, 00:12:08.492 "data_size": 63488 00:12:08.492 } 00:12:08.492 ] 00:12:08.492 }' 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.492 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.059 [2024-11-20 14:22:47.888047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.059 BaseBdev1 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.059 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.059 [ 00:12:09.059 { 00:12:09.059 "name": "BaseBdev1", 00:12:09.059 "aliases": [ 00:12:09.059 "aefec344-febb-4ad4-832a-34a7f2ef774d" 00:12:09.059 ], 00:12:09.059 "product_name": "Malloc disk", 00:12:09.059 "block_size": 512, 00:12:09.059 "num_blocks": 65536, 00:12:09.059 "uuid": "aefec344-febb-4ad4-832a-34a7f2ef774d", 00:12:09.059 "assigned_rate_limits": { 00:12:09.059 "rw_ios_per_sec": 0, 00:12:09.059 "rw_mbytes_per_sec": 0, 
00:12:09.059 "r_mbytes_per_sec": 0, 00:12:09.059 "w_mbytes_per_sec": 0 00:12:09.059 }, 00:12:09.059 "claimed": true, 00:12:09.059 "claim_type": "exclusive_write", 00:12:09.059 "zoned": false, 00:12:09.059 "supported_io_types": { 00:12:09.059 "read": true, 00:12:09.059 "write": true, 00:12:09.059 "unmap": true, 00:12:09.059 "flush": true, 00:12:09.059 "reset": true, 00:12:09.059 "nvme_admin": false, 00:12:09.059 "nvme_io": false, 00:12:09.059 "nvme_io_md": false, 00:12:09.059 "write_zeroes": true, 00:12:09.059 "zcopy": true, 00:12:09.059 "get_zone_info": false, 00:12:09.059 "zone_management": false, 00:12:09.059 "zone_append": false, 00:12:09.059 "compare": false, 00:12:09.059 "compare_and_write": false, 00:12:09.059 "abort": true, 00:12:09.059 "seek_hole": false, 00:12:09.059 "seek_data": false, 00:12:09.059 "copy": true, 00:12:09.059 "nvme_iov_md": false 00:12:09.060 }, 00:12:09.060 "memory_domains": [ 00:12:09.060 { 00:12:09.060 "dma_device_id": "system", 00:12:09.060 "dma_device_type": 1 00:12:09.060 }, 00:12:09.060 { 00:12:09.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.060 "dma_device_type": 2 00:12:09.060 } 00:12:09.060 ], 00:12:09.060 "driver_specific": {} 00:12:09.060 } 00:12:09.060 ] 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.060 14:22:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.060 "name": "Existed_Raid", 00:12:09.060 "uuid": "049cf621-ea59-466a-a868-1b17065f7d1f", 00:12:09.060 "strip_size_kb": 64, 00:12:09.060 "state": "configuring", 00:12:09.060 "raid_level": "raid0", 00:12:09.060 "superblock": true, 00:12:09.060 "num_base_bdevs": 4, 00:12:09.060 "num_base_bdevs_discovered": 3, 00:12:09.060 "num_base_bdevs_operational": 4, 00:12:09.060 "base_bdevs_list": [ 00:12:09.060 { 00:12:09.060 "name": "BaseBdev1", 00:12:09.060 "uuid": "aefec344-febb-4ad4-832a-34a7f2ef774d", 00:12:09.060 "is_configured": true, 00:12:09.060 "data_offset": 2048, 00:12:09.060 "data_size": 63488 00:12:09.060 }, 00:12:09.060 { 
00:12:09.060 "name": null, 00:12:09.060 "uuid": "fd7bcf0b-bab7-440f-a7da-36a65c0105bb", 00:12:09.060 "is_configured": false, 00:12:09.060 "data_offset": 0, 00:12:09.060 "data_size": 63488 00:12:09.060 }, 00:12:09.060 { 00:12:09.060 "name": "BaseBdev3", 00:12:09.060 "uuid": "43a050cf-3cb6-4f76-8177-e9475bf1f90d", 00:12:09.060 "is_configured": true, 00:12:09.060 "data_offset": 2048, 00:12:09.060 "data_size": 63488 00:12:09.060 }, 00:12:09.060 { 00:12:09.060 "name": "BaseBdev4", 00:12:09.060 "uuid": "a4d02afd-7b1e-4adb-8e20-77be2b52e53e", 00:12:09.060 "is_configured": true, 00:12:09.060 "data_offset": 2048, 00:12:09.060 "data_size": 63488 00:12:09.060 } 00:12:09.060 ] 00:12:09.060 }' 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.060 14:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.628 [2024-11-20 14:22:48.468273] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.628 14:22:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.628 "name": "Existed_Raid", 00:12:09.628 "uuid": "049cf621-ea59-466a-a868-1b17065f7d1f", 00:12:09.628 "strip_size_kb": 64, 00:12:09.628 "state": "configuring", 00:12:09.628 "raid_level": "raid0", 00:12:09.628 "superblock": true, 00:12:09.628 "num_base_bdevs": 4, 00:12:09.628 "num_base_bdevs_discovered": 2, 00:12:09.628 "num_base_bdevs_operational": 4, 00:12:09.628 "base_bdevs_list": [ 00:12:09.628 { 00:12:09.628 "name": "BaseBdev1", 00:12:09.628 "uuid": "aefec344-febb-4ad4-832a-34a7f2ef774d", 00:12:09.628 "is_configured": true, 00:12:09.628 "data_offset": 2048, 00:12:09.628 "data_size": 63488 00:12:09.628 }, 00:12:09.628 { 00:12:09.628 "name": null, 00:12:09.628 "uuid": "fd7bcf0b-bab7-440f-a7da-36a65c0105bb", 00:12:09.628 "is_configured": false, 00:12:09.628 "data_offset": 0, 00:12:09.628 "data_size": 63488 00:12:09.628 }, 00:12:09.628 { 00:12:09.628 "name": null, 00:12:09.628 "uuid": "43a050cf-3cb6-4f76-8177-e9475bf1f90d", 00:12:09.628 "is_configured": false, 00:12:09.628 "data_offset": 0, 00:12:09.628 "data_size": 63488 00:12:09.628 }, 00:12:09.628 { 00:12:09.628 "name": "BaseBdev4", 00:12:09.628 "uuid": "a4d02afd-7b1e-4adb-8e20-77be2b52e53e", 00:12:09.628 "is_configured": true, 00:12:09.628 "data_offset": 2048, 00:12:09.628 "data_size": 63488 00:12:09.628 } 00:12:09.628 ] 00:12:09.628 }' 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.628 14:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.197 
14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.197 [2024-11-20 14:22:49.108415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.197 "name": "Existed_Raid", 00:12:10.197 "uuid": "049cf621-ea59-466a-a868-1b17065f7d1f", 00:12:10.197 "strip_size_kb": 64, 00:12:10.197 "state": "configuring", 00:12:10.197 "raid_level": "raid0", 00:12:10.197 "superblock": true, 00:12:10.197 "num_base_bdevs": 4, 00:12:10.197 "num_base_bdevs_discovered": 3, 00:12:10.197 "num_base_bdevs_operational": 4, 00:12:10.197 "base_bdevs_list": [ 00:12:10.197 { 00:12:10.197 "name": "BaseBdev1", 00:12:10.197 "uuid": "aefec344-febb-4ad4-832a-34a7f2ef774d", 00:12:10.197 "is_configured": true, 00:12:10.197 "data_offset": 2048, 00:12:10.197 "data_size": 63488 00:12:10.197 }, 00:12:10.197 { 00:12:10.197 "name": null, 00:12:10.197 "uuid": "fd7bcf0b-bab7-440f-a7da-36a65c0105bb", 00:12:10.197 "is_configured": false, 00:12:10.197 "data_offset": 0, 00:12:10.197 "data_size": 63488 00:12:10.197 }, 00:12:10.197 { 00:12:10.197 "name": "BaseBdev3", 00:12:10.197 "uuid": "43a050cf-3cb6-4f76-8177-e9475bf1f90d", 00:12:10.197 "is_configured": true, 00:12:10.197 "data_offset": 2048, 00:12:10.197 "data_size": 63488 00:12:10.197 }, 00:12:10.197 { 00:12:10.197 "name": "BaseBdev4", 00:12:10.197 "uuid": 
"a4d02afd-7b1e-4adb-8e20-77be2b52e53e", 00:12:10.197 "is_configured": true, 00:12:10.197 "data_offset": 2048, 00:12:10.197 "data_size": 63488 00:12:10.197 } 00:12:10.197 ] 00:12:10.197 }' 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.197 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.765 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.765 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:10.765 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.765 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.765 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.765 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:10.765 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:10.765 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.765 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.765 [2024-11-20 14:22:49.676616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.024 "name": "Existed_Raid", 00:12:11.024 "uuid": "049cf621-ea59-466a-a868-1b17065f7d1f", 00:12:11.024 "strip_size_kb": 64, 00:12:11.024 "state": "configuring", 00:12:11.024 "raid_level": "raid0", 00:12:11.024 "superblock": true, 00:12:11.024 "num_base_bdevs": 4, 00:12:11.024 "num_base_bdevs_discovered": 2, 00:12:11.024 "num_base_bdevs_operational": 4, 00:12:11.024 "base_bdevs_list": [ 00:12:11.024 { 00:12:11.024 "name": null, 00:12:11.024 
"uuid": "aefec344-febb-4ad4-832a-34a7f2ef774d", 00:12:11.024 "is_configured": false, 00:12:11.024 "data_offset": 0, 00:12:11.024 "data_size": 63488 00:12:11.024 }, 00:12:11.024 { 00:12:11.024 "name": null, 00:12:11.024 "uuid": "fd7bcf0b-bab7-440f-a7da-36a65c0105bb", 00:12:11.024 "is_configured": false, 00:12:11.024 "data_offset": 0, 00:12:11.024 "data_size": 63488 00:12:11.024 }, 00:12:11.024 { 00:12:11.024 "name": "BaseBdev3", 00:12:11.024 "uuid": "43a050cf-3cb6-4f76-8177-e9475bf1f90d", 00:12:11.024 "is_configured": true, 00:12:11.024 "data_offset": 2048, 00:12:11.024 "data_size": 63488 00:12:11.024 }, 00:12:11.024 { 00:12:11.024 "name": "BaseBdev4", 00:12:11.024 "uuid": "a4d02afd-7b1e-4adb-8e20-77be2b52e53e", 00:12:11.024 "is_configured": true, 00:12:11.024 "data_offset": 2048, 00:12:11.024 "data_size": 63488 00:12:11.024 } 00:12:11.024 ] 00:12:11.024 }' 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.024 14:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.591 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.591 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.591 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.592 [2024-11-20 14:22:50.333954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.592 14:22:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.592 "name": "Existed_Raid", 00:12:11.592 "uuid": "049cf621-ea59-466a-a868-1b17065f7d1f", 00:12:11.592 "strip_size_kb": 64, 00:12:11.592 "state": "configuring", 00:12:11.592 "raid_level": "raid0", 00:12:11.592 "superblock": true, 00:12:11.592 "num_base_bdevs": 4, 00:12:11.592 "num_base_bdevs_discovered": 3, 00:12:11.592 "num_base_bdevs_operational": 4, 00:12:11.592 "base_bdevs_list": [ 00:12:11.592 { 00:12:11.592 "name": null, 00:12:11.592 "uuid": "aefec344-febb-4ad4-832a-34a7f2ef774d", 00:12:11.592 "is_configured": false, 00:12:11.592 "data_offset": 0, 00:12:11.592 "data_size": 63488 00:12:11.592 }, 00:12:11.592 { 00:12:11.592 "name": "BaseBdev2", 00:12:11.592 "uuid": "fd7bcf0b-bab7-440f-a7da-36a65c0105bb", 00:12:11.592 "is_configured": true, 00:12:11.592 "data_offset": 2048, 00:12:11.592 "data_size": 63488 00:12:11.592 }, 00:12:11.592 { 00:12:11.592 "name": "BaseBdev3", 00:12:11.592 "uuid": "43a050cf-3cb6-4f76-8177-e9475bf1f90d", 00:12:11.592 "is_configured": true, 00:12:11.592 "data_offset": 2048, 00:12:11.592 "data_size": 63488 00:12:11.592 }, 00:12:11.592 { 00:12:11.592 "name": "BaseBdev4", 00:12:11.592 "uuid": "a4d02afd-7b1e-4adb-8e20-77be2b52e53e", 00:12:11.592 "is_configured": true, 00:12:11.592 "data_offset": 2048, 00:12:11.592 "data_size": 63488 00:12:11.592 } 00:12:11.592 ] 00:12:11.592 }' 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.592 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.160 14:22:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u aefec344-febb-4ad4-832a-34a7f2ef774d 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.160 [2024-11-20 14:22:50.983849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:12.160 [2024-11-20 14:22:50.984166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:12.160 [2024-11-20 14:22:50.984185] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:12.160 [2024-11-20 14:22:50.984512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:12.160 [2024-11-20 14:22:50.984698] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:12.160 [2024-11-20 14:22:50.984721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:12.160 NewBaseBdev 00:12:12.160 [2024-11-20 14:22:50.984876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:12.160 14:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.160 14:22:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.160 [ 00:12:12.160 { 00:12:12.160 "name": "NewBaseBdev", 00:12:12.160 "aliases": [ 00:12:12.160 "aefec344-febb-4ad4-832a-34a7f2ef774d" 00:12:12.160 ], 00:12:12.160 "product_name": "Malloc disk", 00:12:12.160 "block_size": 512, 00:12:12.161 "num_blocks": 65536, 00:12:12.161 "uuid": "aefec344-febb-4ad4-832a-34a7f2ef774d", 00:12:12.161 "assigned_rate_limits": { 00:12:12.161 "rw_ios_per_sec": 0, 00:12:12.161 "rw_mbytes_per_sec": 0, 00:12:12.161 "r_mbytes_per_sec": 0, 00:12:12.161 "w_mbytes_per_sec": 0 00:12:12.161 }, 00:12:12.161 "claimed": true, 00:12:12.161 "claim_type": "exclusive_write", 00:12:12.161 "zoned": false, 00:12:12.161 "supported_io_types": { 00:12:12.161 "read": true, 00:12:12.161 "write": true, 00:12:12.161 "unmap": true, 00:12:12.161 "flush": true, 00:12:12.161 "reset": true, 00:12:12.161 "nvme_admin": false, 00:12:12.161 "nvme_io": false, 00:12:12.161 "nvme_io_md": false, 00:12:12.161 "write_zeroes": true, 00:12:12.161 "zcopy": true, 00:12:12.161 "get_zone_info": false, 00:12:12.161 "zone_management": false, 00:12:12.161 "zone_append": false, 00:12:12.161 "compare": false, 00:12:12.161 "compare_and_write": false, 00:12:12.161 "abort": true, 00:12:12.161 "seek_hole": false, 00:12:12.161 "seek_data": false, 00:12:12.161 "copy": true, 00:12:12.161 "nvme_iov_md": false 00:12:12.161 }, 00:12:12.161 "memory_domains": [ 00:12:12.161 { 00:12:12.161 "dma_device_id": "system", 00:12:12.161 "dma_device_type": 1 00:12:12.161 }, 00:12:12.161 { 00:12:12.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.161 "dma_device_type": 2 00:12:12.161 } 00:12:12.161 ], 00:12:12.161 "driver_specific": {} 00:12:12.161 } 00:12:12.161 ] 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:12.161 14:22:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.161 "name": "Existed_Raid", 00:12:12.161 "uuid": "049cf621-ea59-466a-a868-1b17065f7d1f", 00:12:12.161 "strip_size_kb": 64, 00:12:12.161 
"state": "online", 00:12:12.161 "raid_level": "raid0", 00:12:12.161 "superblock": true, 00:12:12.161 "num_base_bdevs": 4, 00:12:12.161 "num_base_bdevs_discovered": 4, 00:12:12.161 "num_base_bdevs_operational": 4, 00:12:12.161 "base_bdevs_list": [ 00:12:12.161 { 00:12:12.161 "name": "NewBaseBdev", 00:12:12.161 "uuid": "aefec344-febb-4ad4-832a-34a7f2ef774d", 00:12:12.161 "is_configured": true, 00:12:12.161 "data_offset": 2048, 00:12:12.161 "data_size": 63488 00:12:12.161 }, 00:12:12.161 { 00:12:12.161 "name": "BaseBdev2", 00:12:12.161 "uuid": "fd7bcf0b-bab7-440f-a7da-36a65c0105bb", 00:12:12.161 "is_configured": true, 00:12:12.161 "data_offset": 2048, 00:12:12.161 "data_size": 63488 00:12:12.161 }, 00:12:12.161 { 00:12:12.161 "name": "BaseBdev3", 00:12:12.161 "uuid": "43a050cf-3cb6-4f76-8177-e9475bf1f90d", 00:12:12.161 "is_configured": true, 00:12:12.161 "data_offset": 2048, 00:12:12.161 "data_size": 63488 00:12:12.161 }, 00:12:12.161 { 00:12:12.161 "name": "BaseBdev4", 00:12:12.161 "uuid": "a4d02afd-7b1e-4adb-8e20-77be2b52e53e", 00:12:12.161 "is_configured": true, 00:12:12.161 "data_offset": 2048, 00:12:12.161 "data_size": 63488 00:12:12.161 } 00:12:12.161 ] 00:12:12.161 }' 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.161 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.728 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:12.728 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:12.728 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:12.728 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:12.728 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:12.728 
14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:12.728 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:12.728 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:12.728 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.728 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.728 [2024-11-20 14:22:51.536513] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.728 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.728 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:12.728 "name": "Existed_Raid", 00:12:12.728 "aliases": [ 00:12:12.728 "049cf621-ea59-466a-a868-1b17065f7d1f" 00:12:12.728 ], 00:12:12.728 "product_name": "Raid Volume", 00:12:12.728 "block_size": 512, 00:12:12.728 "num_blocks": 253952, 00:12:12.728 "uuid": "049cf621-ea59-466a-a868-1b17065f7d1f", 00:12:12.728 "assigned_rate_limits": { 00:12:12.728 "rw_ios_per_sec": 0, 00:12:12.728 "rw_mbytes_per_sec": 0, 00:12:12.728 "r_mbytes_per_sec": 0, 00:12:12.728 "w_mbytes_per_sec": 0 00:12:12.728 }, 00:12:12.728 "claimed": false, 00:12:12.728 "zoned": false, 00:12:12.728 "supported_io_types": { 00:12:12.728 "read": true, 00:12:12.728 "write": true, 00:12:12.728 "unmap": true, 00:12:12.728 "flush": true, 00:12:12.728 "reset": true, 00:12:12.728 "nvme_admin": false, 00:12:12.728 "nvme_io": false, 00:12:12.728 "nvme_io_md": false, 00:12:12.728 "write_zeroes": true, 00:12:12.728 "zcopy": false, 00:12:12.728 "get_zone_info": false, 00:12:12.728 "zone_management": false, 00:12:12.728 "zone_append": false, 00:12:12.728 "compare": false, 00:12:12.728 "compare_and_write": false, 00:12:12.728 "abort": 
false, 00:12:12.728 "seek_hole": false, 00:12:12.728 "seek_data": false, 00:12:12.728 "copy": false, 00:12:12.728 "nvme_iov_md": false 00:12:12.728 }, 00:12:12.728 "memory_domains": [ 00:12:12.728 { 00:12:12.728 "dma_device_id": "system", 00:12:12.728 "dma_device_type": 1 00:12:12.728 }, 00:12:12.728 { 00:12:12.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.728 "dma_device_type": 2 00:12:12.728 }, 00:12:12.728 { 00:12:12.728 "dma_device_id": "system", 00:12:12.728 "dma_device_type": 1 00:12:12.728 }, 00:12:12.728 { 00:12:12.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.728 "dma_device_type": 2 00:12:12.728 }, 00:12:12.728 { 00:12:12.728 "dma_device_id": "system", 00:12:12.728 "dma_device_type": 1 00:12:12.728 }, 00:12:12.728 { 00:12:12.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.728 "dma_device_type": 2 00:12:12.728 }, 00:12:12.728 { 00:12:12.728 "dma_device_id": "system", 00:12:12.728 "dma_device_type": 1 00:12:12.728 }, 00:12:12.728 { 00:12:12.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.728 "dma_device_type": 2 00:12:12.728 } 00:12:12.728 ], 00:12:12.728 "driver_specific": { 00:12:12.728 "raid": { 00:12:12.728 "uuid": "049cf621-ea59-466a-a868-1b17065f7d1f", 00:12:12.728 "strip_size_kb": 64, 00:12:12.728 "state": "online", 00:12:12.728 "raid_level": "raid0", 00:12:12.728 "superblock": true, 00:12:12.728 "num_base_bdevs": 4, 00:12:12.728 "num_base_bdevs_discovered": 4, 00:12:12.728 "num_base_bdevs_operational": 4, 00:12:12.728 "base_bdevs_list": [ 00:12:12.728 { 00:12:12.728 "name": "NewBaseBdev", 00:12:12.728 "uuid": "aefec344-febb-4ad4-832a-34a7f2ef774d", 00:12:12.728 "is_configured": true, 00:12:12.728 "data_offset": 2048, 00:12:12.728 "data_size": 63488 00:12:12.728 }, 00:12:12.728 { 00:12:12.728 "name": "BaseBdev2", 00:12:12.728 "uuid": "fd7bcf0b-bab7-440f-a7da-36a65c0105bb", 00:12:12.728 "is_configured": true, 00:12:12.728 "data_offset": 2048, 00:12:12.728 "data_size": 63488 00:12:12.728 }, 00:12:12.728 { 00:12:12.728 
"name": "BaseBdev3", 00:12:12.728 "uuid": "43a050cf-3cb6-4f76-8177-e9475bf1f90d", 00:12:12.728 "is_configured": true, 00:12:12.728 "data_offset": 2048, 00:12:12.728 "data_size": 63488 00:12:12.728 }, 00:12:12.728 { 00:12:12.728 "name": "BaseBdev4", 00:12:12.728 "uuid": "a4d02afd-7b1e-4adb-8e20-77be2b52e53e", 00:12:12.728 "is_configured": true, 00:12:12.728 "data_offset": 2048, 00:12:12.728 "data_size": 63488 00:12:12.728 } 00:12:12.728 ] 00:12:12.728 } 00:12:12.728 } 00:12:12.728 }' 00:12:12.728 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:12.728 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:12.729 BaseBdev2 00:12:12.729 BaseBdev3 00:12:12.729 BaseBdev4' 00:12:12.729 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.987 14:22:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.987 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.988 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.988 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:12.988 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.988 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.988 [2024-11-20 14:22:51.952208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:12.988 [2024-11-20 14:22:51.952251] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.988 [2024-11-20 14:22:51.952356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.988 [2024-11-20 14:22:51.952449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.988 [2024-11-20 14:22:51.952466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:12.988 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.988 14:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70152 00:12:12.988 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70152 ']' 00:12:12.988 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70152 00:12:12.988 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:12.988 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.988 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70152 00:12:13.246 killing process with pid 70152 00:12:13.246 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.246 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.246 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70152' 00:12:13.246 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70152 00:12:13.246 [2024-11-20 14:22:51.988849] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:13.246 14:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70152 00:12:13.503 [2024-11-20 14:22:52.350100] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:14.437 14:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:14.437 00:12:14.437 real 0m12.830s 00:12:14.437 user 0m21.295s 00:12:14.437 sys 0m1.740s 00:12:14.437 14:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.437 
************************************ 00:12:14.437 END TEST raid_state_function_test_sb 00:12:14.437 ************************************ 00:12:14.437 14:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.696 14:22:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:12:14.696 14:22:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:14.696 14:22:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.696 14:22:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:14.696 ************************************ 00:12:14.696 START TEST raid_superblock_test 00:12:14.696 ************************************ 00:12:14.696 14:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70842 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70842 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:14.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70842 ']' 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.697 14:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.697 [2024-11-20 14:22:53.535220] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:12:14.697 [2024-11-20 14:22:53.535614] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70842 ] 00:12:14.955 [2024-11-20 14:22:53.712484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.955 [2024-11-20 14:22:53.850603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.215 [2024-11-20 14:22:54.057235] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.215 [2024-11-20 14:22:54.057527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.782 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.782 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:15.782 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:15.782 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:15.782 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:15.782 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:15.782 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:15.782 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:15.782 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:15.782 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:15.782 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:15.783 
14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.783 malloc1 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.783 [2024-11-20 14:22:54.589513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:15.783 [2024-11-20 14:22:54.589733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.783 [2024-11-20 14:22:54.589823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:15.783 [2024-11-20 14:22:54.590086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.783 [2024-11-20 14:22:54.593090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.783 [2024-11-20 14:22:54.593269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:15.783 pt1 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.783 malloc2 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.783 [2024-11-20 14:22:54.646066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:15.783 [2024-11-20 14:22:54.646144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.783 [2024-11-20 14:22:54.646187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:15.783 [2024-11-20 14:22:54.646205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.783 [2024-11-20 14:22:54.648982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.783 [2024-11-20 14:22:54.649049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:15.783 
pt2 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.783 malloc3 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.783 [2024-11-20 14:22:54.725230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:15.783 [2024-11-20 14:22:54.725310] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.783 [2024-11-20 14:22:54.725350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:15.783 [2024-11-20 14:22:54.725369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.783 [2024-11-20 14:22:54.728886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.783 pt3 00:12:15.783 [2024-11-20 14:22:54.729116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.783 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.042 malloc4 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.042 [2024-11-20 14:22:54.788848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:16.042 [2024-11-20 14:22:54.789163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.042 [2024-11-20 14:22:54.789226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:16.042 [2024-11-20 14:22:54.789251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.042 [2024-11-20 14:22:54.792859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.042 [2024-11-20 14:22:54.793081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:16.042 pt4 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.042 [2024-11-20 14:22:54.797356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:16.042 [2024-11-20 
14:22:54.800278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:16.042 [2024-11-20 14:22:54.800583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:16.042 [2024-11-20 14:22:54.800688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:16.042 [2024-11-20 14:22:54.801008] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:16.042 [2024-11-20 14:22:54.801032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:16.042 [2024-11-20 14:22:54.801426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:16.042 [2024-11-20 14:22:54.801695] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:16.042 [2024-11-20 14:22:54.801722] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:16.042 [2024-11-20 14:22:54.802053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.042 "name": "raid_bdev1", 00:12:16.042 "uuid": "61e3f2da-4565-414f-b72a-b31d521423df", 00:12:16.042 "strip_size_kb": 64, 00:12:16.042 "state": "online", 00:12:16.042 "raid_level": "raid0", 00:12:16.042 "superblock": true, 00:12:16.042 "num_base_bdevs": 4, 00:12:16.042 "num_base_bdevs_discovered": 4, 00:12:16.042 "num_base_bdevs_operational": 4, 00:12:16.042 "base_bdevs_list": [ 00:12:16.042 { 00:12:16.042 "name": "pt1", 00:12:16.042 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:16.042 "is_configured": true, 00:12:16.042 "data_offset": 2048, 00:12:16.042 "data_size": 63488 00:12:16.042 }, 00:12:16.042 { 00:12:16.042 "name": "pt2", 00:12:16.042 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.042 "is_configured": true, 00:12:16.042 "data_offset": 2048, 00:12:16.042 "data_size": 63488 00:12:16.042 }, 00:12:16.042 { 00:12:16.042 "name": "pt3", 00:12:16.042 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.042 "is_configured": true, 00:12:16.042 "data_offset": 2048, 00:12:16.042 
"data_size": 63488 00:12:16.042 }, 00:12:16.042 { 00:12:16.042 "name": "pt4", 00:12:16.042 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:16.042 "is_configured": true, 00:12:16.042 "data_offset": 2048, 00:12:16.042 "data_size": 63488 00:12:16.042 } 00:12:16.042 ] 00:12:16.042 }' 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.042 14:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.610 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:16.610 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:16.610 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:16.610 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:16.610 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:16.610 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:16.610 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:16.610 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.610 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.610 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:16.610 [2024-11-20 14:22:55.326624] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.610 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.610 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:16.610 "name": "raid_bdev1", 00:12:16.610 "aliases": [ 00:12:16.610 "61e3f2da-4565-414f-b72a-b31d521423df" 
00:12:16.610 ], 00:12:16.610 "product_name": "Raid Volume", 00:12:16.610 "block_size": 512, 00:12:16.610 "num_blocks": 253952, 00:12:16.610 "uuid": "61e3f2da-4565-414f-b72a-b31d521423df", 00:12:16.610 "assigned_rate_limits": { 00:12:16.610 "rw_ios_per_sec": 0, 00:12:16.610 "rw_mbytes_per_sec": 0, 00:12:16.610 "r_mbytes_per_sec": 0, 00:12:16.610 "w_mbytes_per_sec": 0 00:12:16.610 }, 00:12:16.610 "claimed": false, 00:12:16.610 "zoned": false, 00:12:16.610 "supported_io_types": { 00:12:16.610 "read": true, 00:12:16.610 "write": true, 00:12:16.610 "unmap": true, 00:12:16.610 "flush": true, 00:12:16.610 "reset": true, 00:12:16.610 "nvme_admin": false, 00:12:16.610 "nvme_io": false, 00:12:16.610 "nvme_io_md": false, 00:12:16.610 "write_zeroes": true, 00:12:16.610 "zcopy": false, 00:12:16.610 "get_zone_info": false, 00:12:16.610 "zone_management": false, 00:12:16.610 "zone_append": false, 00:12:16.610 "compare": false, 00:12:16.610 "compare_and_write": false, 00:12:16.610 "abort": false, 00:12:16.610 "seek_hole": false, 00:12:16.610 "seek_data": false, 00:12:16.610 "copy": false, 00:12:16.610 "nvme_iov_md": false 00:12:16.610 }, 00:12:16.610 "memory_domains": [ 00:12:16.610 { 00:12:16.610 "dma_device_id": "system", 00:12:16.610 "dma_device_type": 1 00:12:16.610 }, 00:12:16.610 { 00:12:16.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.610 "dma_device_type": 2 00:12:16.610 }, 00:12:16.610 { 00:12:16.610 "dma_device_id": "system", 00:12:16.610 "dma_device_type": 1 00:12:16.610 }, 00:12:16.610 { 00:12:16.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.610 "dma_device_type": 2 00:12:16.610 }, 00:12:16.610 { 00:12:16.610 "dma_device_id": "system", 00:12:16.610 "dma_device_type": 1 00:12:16.610 }, 00:12:16.610 { 00:12:16.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.610 "dma_device_type": 2 00:12:16.610 }, 00:12:16.610 { 00:12:16.610 "dma_device_id": "system", 00:12:16.610 "dma_device_type": 1 00:12:16.610 }, 00:12:16.610 { 00:12:16.610 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:16.610 "dma_device_type": 2 00:12:16.610 } 00:12:16.610 ], 00:12:16.610 "driver_specific": { 00:12:16.610 "raid": { 00:12:16.610 "uuid": "61e3f2da-4565-414f-b72a-b31d521423df", 00:12:16.610 "strip_size_kb": 64, 00:12:16.610 "state": "online", 00:12:16.610 "raid_level": "raid0", 00:12:16.610 "superblock": true, 00:12:16.610 "num_base_bdevs": 4, 00:12:16.610 "num_base_bdevs_discovered": 4, 00:12:16.610 "num_base_bdevs_operational": 4, 00:12:16.610 "base_bdevs_list": [ 00:12:16.610 { 00:12:16.610 "name": "pt1", 00:12:16.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:16.610 "is_configured": true, 00:12:16.610 "data_offset": 2048, 00:12:16.610 "data_size": 63488 00:12:16.610 }, 00:12:16.610 { 00:12:16.610 "name": "pt2", 00:12:16.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.610 "is_configured": true, 00:12:16.610 "data_offset": 2048, 00:12:16.610 "data_size": 63488 00:12:16.610 }, 00:12:16.610 { 00:12:16.610 "name": "pt3", 00:12:16.610 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.610 "is_configured": true, 00:12:16.610 "data_offset": 2048, 00:12:16.610 "data_size": 63488 00:12:16.610 }, 00:12:16.610 { 00:12:16.610 "name": "pt4", 00:12:16.610 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:16.611 "is_configured": true, 00:12:16.611 "data_offset": 2048, 00:12:16.611 "data_size": 63488 00:12:16.611 } 00:12:16.611 ] 00:12:16.611 } 00:12:16.611 } 00:12:16.611 }' 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:16.611 pt2 00:12:16.611 pt3 00:12:16.611 pt4' 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.611 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.870 14:22:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.870 [2024-11-20 14:22:55.726623] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.870 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=61e3f2da-4565-414f-b72a-b31d521423df 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 61e3f2da-4565-414f-b72a-b31d521423df ']' 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.871 [2024-11-20 14:22:55.766262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:16.871 [2024-11-20 14:22:55.766294] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:16.871 [2024-11-20 14:22:55.766392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.871 [2024-11-20 14:22:55.766482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:16.871 [2024-11-20 14:22:55.766506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.871 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.129 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.129 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:17.129 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.129 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.129 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:17.129 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.129 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:17.129 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:17.129 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:17.129 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:17.129 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:17.129 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:17.129 14:22:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.130 [2024-11-20 14:22:55.914323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:17.130 [2024-11-20 14:22:55.916790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:17.130 [2024-11-20 14:22:55.916853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:17.130 [2024-11-20 14:22:55.916907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:17.130 [2024-11-20 14:22:55.916980] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:17.130 [2024-11-20 14:22:55.917066] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:17.130 [2024-11-20 14:22:55.917100] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:17.130 [2024-11-20 14:22:55.917132] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:17.130 [2024-11-20 14:22:55.917155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:17.130 [2024-11-20 14:22:55.917174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:12:17.130 request: 00:12:17.130 { 00:12:17.130 "name": "raid_bdev1", 00:12:17.130 "raid_level": "raid0", 00:12:17.130 "base_bdevs": [ 00:12:17.130 "malloc1", 00:12:17.130 "malloc2", 00:12:17.130 "malloc3", 00:12:17.130 "malloc4" 00:12:17.130 ], 00:12:17.130 "strip_size_kb": 64, 00:12:17.130 "superblock": false, 00:12:17.130 "method": "bdev_raid_create", 00:12:17.130 "req_id": 1 00:12:17.130 } 00:12:17.130 Got JSON-RPC error response 00:12:17.130 response: 00:12:17.130 { 00:12:17.130 "code": -17, 00:12:17.130 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:17.130 } 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.130 [2024-11-20 14:22:55.994403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:17.130 [2024-11-20 14:22:55.994526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.130 [2024-11-20 14:22:55.994581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:17.130 [2024-11-20 14:22:55.994611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.130 [2024-11-20 14:22:55.998614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.130 [2024-11-20 14:22:55.998684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:17.130 [2024-11-20 14:22:55.998842] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:17.130 [2024-11-20 14:22:55.998968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:17.130 pt1 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.130 14:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.130 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.130 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:17.130 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.130 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.130 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.130 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.130 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.130 14:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.130 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.130 14:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.130 14:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.130 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.130 "name": "raid_bdev1", 00:12:17.130 "uuid": "61e3f2da-4565-414f-b72a-b31d521423df", 00:12:17.130 "strip_size_kb": 64, 00:12:17.130 "state": "configuring", 00:12:17.130 "raid_level": "raid0", 00:12:17.130 "superblock": true, 00:12:17.130 "num_base_bdevs": 4, 00:12:17.130 "num_base_bdevs_discovered": 1, 00:12:17.130 "num_base_bdevs_operational": 4, 00:12:17.130 "base_bdevs_list": [ 00:12:17.130 { 00:12:17.130 "name": "pt1", 00:12:17.130 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.130 "is_configured": true, 00:12:17.130 "data_offset": 2048, 00:12:17.130 "data_size": 63488 00:12:17.130 }, 00:12:17.130 { 00:12:17.130 "name": null, 00:12:17.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.130 "is_configured": false, 00:12:17.130 "data_offset": 2048, 00:12:17.130 "data_size": 63488 00:12:17.130 }, 00:12:17.130 { 00:12:17.130 "name": null, 00:12:17.130 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.130 "is_configured": false, 00:12:17.130 "data_offset": 2048, 00:12:17.130 "data_size": 63488 00:12:17.130 }, 00:12:17.130 { 00:12:17.130 "name": null, 00:12:17.130 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:17.130 "is_configured": false, 00:12:17.130 "data_offset": 2048, 00:12:17.130 "data_size": 63488 00:12:17.130 } 00:12:17.130 ] 00:12:17.130 }' 00:12:17.130 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.130 14:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.698 [2024-11-20 14:22:56.523146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:17.698 [2024-11-20 14:22:56.523438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.698 [2024-11-20 14:22:56.523634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:17.698 [2024-11-20 14:22:56.523819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.698 [2024-11-20 14:22:56.524601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.698 [2024-11-20 14:22:56.524803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:17.698 [2024-11-20 14:22:56.525126] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:17.698 [2024-11-20 14:22:56.525334] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:17.698 pt2 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.698 [2024-11-20 14:22:56.531144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.698 14:22:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.698 "name": "raid_bdev1", 00:12:17.698 "uuid": "61e3f2da-4565-414f-b72a-b31d521423df", 00:12:17.698 "strip_size_kb": 64, 00:12:17.698 "state": "configuring", 00:12:17.698 "raid_level": "raid0", 00:12:17.698 "superblock": true, 00:12:17.698 "num_base_bdevs": 4, 00:12:17.698 "num_base_bdevs_discovered": 1, 00:12:17.698 "num_base_bdevs_operational": 4, 00:12:17.698 "base_bdevs_list": [ 00:12:17.698 { 00:12:17.698 "name": "pt1", 00:12:17.698 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.698 "is_configured": true, 00:12:17.698 "data_offset": 2048, 00:12:17.698 "data_size": 63488 00:12:17.698 }, 00:12:17.698 { 00:12:17.698 "name": null, 00:12:17.698 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.698 "is_configured": false, 00:12:17.698 "data_offset": 0, 00:12:17.698 "data_size": 63488 00:12:17.698 }, 00:12:17.698 { 00:12:17.698 "name": null, 00:12:17.698 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.698 "is_configured": false, 00:12:17.698 "data_offset": 2048, 00:12:17.698 "data_size": 63488 00:12:17.698 }, 00:12:17.698 { 00:12:17.698 "name": null, 00:12:17.698 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:17.698 "is_configured": false, 00:12:17.698 "data_offset": 2048, 00:12:17.698 "data_size": 63488 00:12:17.698 } 00:12:17.698 ] 00:12:17.698 }' 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.698 14:22:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:18.266 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:18.266 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:18.266 14:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:18.266 14:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.266 14:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.266 [2024-11-20 14:22:56.999241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:18.266 [2024-11-20 14:22:56.999339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.266 [2024-11-20 14:22:56.999371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:18.266 [2024-11-20 14:22:56.999387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.266 [2024-11-20 14:22:56.999970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.266 [2024-11-20 14:22:57.000013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:18.266 [2024-11-20 14:22:57.000121] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:18.266 [2024-11-20 14:22:57.000164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:18.266 pt2 00:12:18.266 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.266 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:18.266 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:18.266 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:18.266 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.266 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.266 [2024-11-20 14:22:57.007210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:18.266 [2024-11-20 14:22:57.007268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.266 [2024-11-20 14:22:57.007302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:18.266 [2024-11-20 14:22:57.007317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.266 [2024-11-20 14:22:57.007785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.266 [2024-11-20 14:22:57.007827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:18.266 [2024-11-20 14:22:57.007913] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:18.266 [2024-11-20 14:22:57.007950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:18.266 pt3 00:12:18.266 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.266 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:18.266 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:18.266 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:18.266 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.266 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.266 [2024-11-20 14:22:57.015178] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:18.266 [2024-11-20 14:22:57.015234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.266 [2024-11-20 14:22:57.015262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:18.266 [2024-11-20 14:22:57.015276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.266 [2024-11-20 14:22:57.015761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.266 [2024-11-20 14:22:57.015803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:18.266 [2024-11-20 14:22:57.015889] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:18.266 [2024-11-20 14:22:57.015923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:18.266 [2024-11-20 14:22:57.016108] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:18.266 [2024-11-20 14:22:57.016125] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:18.266 [2024-11-20 14:22:57.016425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:18.266 [2024-11-20 14:22:57.016626] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:18.266 [2024-11-20 14:22:57.016650] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:18.266 [2024-11-20 14:22:57.016807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.266 pt4 00:12:18.266 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.266 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.267 "name": "raid_bdev1", 00:12:18.267 "uuid": "61e3f2da-4565-414f-b72a-b31d521423df", 00:12:18.267 "strip_size_kb": 64, 00:12:18.267 "state": "online", 00:12:18.267 "raid_level": "raid0", 00:12:18.267 
"superblock": true, 00:12:18.267 "num_base_bdevs": 4, 00:12:18.267 "num_base_bdevs_discovered": 4, 00:12:18.267 "num_base_bdevs_operational": 4, 00:12:18.267 "base_bdevs_list": [ 00:12:18.267 { 00:12:18.267 "name": "pt1", 00:12:18.267 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:18.267 "is_configured": true, 00:12:18.267 "data_offset": 2048, 00:12:18.267 "data_size": 63488 00:12:18.267 }, 00:12:18.267 { 00:12:18.267 "name": "pt2", 00:12:18.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.267 "is_configured": true, 00:12:18.267 "data_offset": 2048, 00:12:18.267 "data_size": 63488 00:12:18.267 }, 00:12:18.267 { 00:12:18.267 "name": "pt3", 00:12:18.267 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.267 "is_configured": true, 00:12:18.267 "data_offset": 2048, 00:12:18.267 "data_size": 63488 00:12:18.267 }, 00:12:18.267 { 00:12:18.267 "name": "pt4", 00:12:18.267 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:18.267 "is_configured": true, 00:12:18.267 "data_offset": 2048, 00:12:18.267 "data_size": 63488 00:12:18.267 } 00:12:18.267 ] 00:12:18.267 }' 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.267 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.526 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:18.526 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:18.526 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:18.526 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:18.526 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:18.526 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:18.526 14:22:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:18.526 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.526 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:18.526 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.526 [2024-11-20 14:22:57.483817] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.526 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.786 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:18.786 "name": "raid_bdev1", 00:12:18.786 "aliases": [ 00:12:18.786 "61e3f2da-4565-414f-b72a-b31d521423df" 00:12:18.786 ], 00:12:18.786 "product_name": "Raid Volume", 00:12:18.786 "block_size": 512, 00:12:18.786 "num_blocks": 253952, 00:12:18.786 "uuid": "61e3f2da-4565-414f-b72a-b31d521423df", 00:12:18.786 "assigned_rate_limits": { 00:12:18.786 "rw_ios_per_sec": 0, 00:12:18.786 "rw_mbytes_per_sec": 0, 00:12:18.786 "r_mbytes_per_sec": 0, 00:12:18.786 "w_mbytes_per_sec": 0 00:12:18.786 }, 00:12:18.786 "claimed": false, 00:12:18.786 "zoned": false, 00:12:18.786 "supported_io_types": { 00:12:18.786 "read": true, 00:12:18.786 "write": true, 00:12:18.786 "unmap": true, 00:12:18.786 "flush": true, 00:12:18.786 "reset": true, 00:12:18.786 "nvme_admin": false, 00:12:18.786 "nvme_io": false, 00:12:18.786 "nvme_io_md": false, 00:12:18.786 "write_zeroes": true, 00:12:18.786 "zcopy": false, 00:12:18.786 "get_zone_info": false, 00:12:18.786 "zone_management": false, 00:12:18.786 "zone_append": false, 00:12:18.786 "compare": false, 00:12:18.786 "compare_and_write": false, 00:12:18.786 "abort": false, 00:12:18.786 "seek_hole": false, 00:12:18.786 "seek_data": false, 00:12:18.786 "copy": false, 00:12:18.786 "nvme_iov_md": false 00:12:18.786 }, 00:12:18.786 
"memory_domains": [ 00:12:18.786 { 00:12:18.786 "dma_device_id": "system", 00:12:18.786 "dma_device_type": 1 00:12:18.786 }, 00:12:18.786 { 00:12:18.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.786 "dma_device_type": 2 00:12:18.786 }, 00:12:18.786 { 00:12:18.786 "dma_device_id": "system", 00:12:18.786 "dma_device_type": 1 00:12:18.787 }, 00:12:18.787 { 00:12:18.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.787 "dma_device_type": 2 00:12:18.787 }, 00:12:18.787 { 00:12:18.787 "dma_device_id": "system", 00:12:18.787 "dma_device_type": 1 00:12:18.787 }, 00:12:18.787 { 00:12:18.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.787 "dma_device_type": 2 00:12:18.787 }, 00:12:18.787 { 00:12:18.787 "dma_device_id": "system", 00:12:18.787 "dma_device_type": 1 00:12:18.787 }, 00:12:18.787 { 00:12:18.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.787 "dma_device_type": 2 00:12:18.787 } 00:12:18.787 ], 00:12:18.787 "driver_specific": { 00:12:18.787 "raid": { 00:12:18.787 "uuid": "61e3f2da-4565-414f-b72a-b31d521423df", 00:12:18.787 "strip_size_kb": 64, 00:12:18.787 "state": "online", 00:12:18.787 "raid_level": "raid0", 00:12:18.787 "superblock": true, 00:12:18.787 "num_base_bdevs": 4, 00:12:18.787 "num_base_bdevs_discovered": 4, 00:12:18.787 "num_base_bdevs_operational": 4, 00:12:18.787 "base_bdevs_list": [ 00:12:18.787 { 00:12:18.787 "name": "pt1", 00:12:18.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:18.787 "is_configured": true, 00:12:18.787 "data_offset": 2048, 00:12:18.787 "data_size": 63488 00:12:18.787 }, 00:12:18.787 { 00:12:18.787 "name": "pt2", 00:12:18.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.787 "is_configured": true, 00:12:18.787 "data_offset": 2048, 00:12:18.787 "data_size": 63488 00:12:18.787 }, 00:12:18.787 { 00:12:18.787 "name": "pt3", 00:12:18.787 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.787 "is_configured": true, 00:12:18.787 "data_offset": 2048, 00:12:18.787 "data_size": 63488 
00:12:18.787 }, 00:12:18.787 { 00:12:18.787 "name": "pt4", 00:12:18.787 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:18.787 "is_configured": true, 00:12:18.787 "data_offset": 2048, 00:12:18.787 "data_size": 63488 00:12:18.787 } 00:12:18.787 ] 00:12:18.787 } 00:12:18.787 } 00:12:18.787 }' 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:18.787 pt2 00:12:18.787 pt3 00:12:18.787 pt4' 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.787 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:19.046 [2024-11-20 14:22:57.851862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 61e3f2da-4565-414f-b72a-b31d521423df '!=' 61e3f2da-4565-414f-b72a-b31d521423df ']' 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70842 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70842 ']' 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70842 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70842 00:12:19.046 killing process with pid 70842 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70842' 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70842 00:12:19.046 14:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70842 00:12:19.046 [2024-11-20 14:22:57.925528] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.046 [2024-11-20 14:22:57.925642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.046 [2024-11-20 14:22:57.925743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:19.046 [2024-11-20 14:22:57.925767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:19.707 [2024-11-20 14:22:58.291511] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.659 ************************************ 00:12:20.659 END TEST raid_superblock_test 00:12:20.659 ************************************ 00:12:20.659 14:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:20.659 00:12:20.659 real 0m5.886s 00:12:20.659 user 0m8.819s 00:12:20.659 sys 0m0.802s 00:12:20.659 14:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.659 14:22:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.659 14:22:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:12:20.659 14:22:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:20.659 14:22:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.659 14:22:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.659 ************************************ 00:12:20.659 START TEST raid_read_error_test 00:12:20.659 ************************************ 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WHZCB7ovOn 00:12:20.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71104 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71104 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71104 ']' 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.659 14:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.660 14:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.660 [2024-11-20 14:22:59.524030] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:12:20.660 [2024-11-20 14:22:59.524201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71104 ] 00:12:20.917 [2024-11-20 14:22:59.708825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.917 [2024-11-20 14:22:59.854795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.174 [2024-11-20 14:23:00.073516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.174 [2024-11-20 14:23:00.073575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.739 BaseBdev1_malloc 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.739 true 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.739 [2024-11-20 14:23:00.604421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:21.739 [2024-11-20 14:23:00.604493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.739 [2024-11-20 14:23:00.604524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:21.739 [2024-11-20 14:23:00.604543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.739 [2024-11-20 14:23:00.607541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.739 [2024-11-20 14:23:00.607596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:21.739 BaseBdev1 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.739 BaseBdev2_malloc 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.739 true 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.739 [2024-11-20 14:23:00.664555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:21.739 [2024-11-20 14:23:00.664626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.739 [2024-11-20 14:23:00.664653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:21.739 [2024-11-20 14:23:00.664671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.739 [2024-11-20 14:23:00.667640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.739 [2024-11-20 14:23:00.667707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:21.739 BaseBdev2 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.739 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.068 BaseBdev3_malloc 00:12:22.068 14:23:00 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.068 true 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.068 [2024-11-20 14:23:00.732469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:22.068 [2024-11-20 14:23:00.732538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.068 [2024-11-20 14:23:00.732567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:22.068 [2024-11-20 14:23:00.732586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.068 [2024-11-20 14:23:00.735386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.068 [2024-11-20 14:23:00.735449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:22.068 BaseBdev3 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.068 BaseBdev4_malloc 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.068 true 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.068 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.068 [2024-11-20 14:23:00.788426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:22.068 [2024-11-20 14:23:00.788496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.068 [2024-11-20 14:23:00.788523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:22.068 [2024-11-20 14:23:00.788542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.068 [2024-11-20 14:23:00.791369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.069 [2024-11-20 14:23:00.791424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:22.069 BaseBdev4 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.069 [2024-11-20 14:23:00.796504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:22.069 [2024-11-20 14:23:00.798916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:22.069 [2024-11-20 14:23:00.799204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:22.069 [2024-11-20 14:23:00.799326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:22.069 [2024-11-20 14:23:00.799646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:22.069 [2024-11-20 14:23:00.799673] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:22.069 [2024-11-20 14:23:00.800009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:22.069 [2024-11-20 14:23:00.800233] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:22.069 [2024-11-20 14:23:00.800252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:22.069 [2024-11-20 14:23:00.800497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:22.069 14:23:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.069 "name": "raid_bdev1", 00:12:22.069 "uuid": "126d4494-3603-4647-b9ab-17922ec69386", 00:12:22.069 "strip_size_kb": 64, 00:12:22.069 "state": "online", 00:12:22.069 "raid_level": "raid0", 00:12:22.069 "superblock": true, 00:12:22.069 "num_base_bdevs": 4, 00:12:22.069 "num_base_bdevs_discovered": 4, 00:12:22.069 "num_base_bdevs_operational": 4, 00:12:22.069 "base_bdevs_list": [ 00:12:22.069 
{ 00:12:22.069 "name": "BaseBdev1", 00:12:22.069 "uuid": "01d1348f-5af6-51d7-abbf-49a5bdab20eb", 00:12:22.069 "is_configured": true, 00:12:22.069 "data_offset": 2048, 00:12:22.069 "data_size": 63488 00:12:22.069 }, 00:12:22.069 { 00:12:22.069 "name": "BaseBdev2", 00:12:22.069 "uuid": "d8d45346-7ed4-5643-92d0-2ec584a2c702", 00:12:22.069 "is_configured": true, 00:12:22.069 "data_offset": 2048, 00:12:22.069 "data_size": 63488 00:12:22.069 }, 00:12:22.069 { 00:12:22.069 "name": "BaseBdev3", 00:12:22.069 "uuid": "22a149b3-52b3-5fcb-b637-013515082dd8", 00:12:22.069 "is_configured": true, 00:12:22.069 "data_offset": 2048, 00:12:22.069 "data_size": 63488 00:12:22.069 }, 00:12:22.069 { 00:12:22.069 "name": "BaseBdev4", 00:12:22.069 "uuid": "8dff989b-63dd-5cb6-af5d-9840b4fa34b9", 00:12:22.069 "is_configured": true, 00:12:22.069 "data_offset": 2048, 00:12:22.069 "data_size": 63488 00:12:22.069 } 00:12:22.069 ] 00:12:22.069 }' 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.069 14:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.327 14:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:22.327 14:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:22.585 [2024-11-20 14:23:01.406352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.518 14:23:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.518 14:23:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.518 "name": "raid_bdev1", 00:12:23.518 "uuid": "126d4494-3603-4647-b9ab-17922ec69386", 00:12:23.518 "strip_size_kb": 64, 00:12:23.518 "state": "online", 00:12:23.518 "raid_level": "raid0", 00:12:23.518 "superblock": true, 00:12:23.518 "num_base_bdevs": 4, 00:12:23.518 "num_base_bdevs_discovered": 4, 00:12:23.518 "num_base_bdevs_operational": 4, 00:12:23.518 "base_bdevs_list": [ 00:12:23.518 { 00:12:23.518 "name": "BaseBdev1", 00:12:23.518 "uuid": "01d1348f-5af6-51d7-abbf-49a5bdab20eb", 00:12:23.518 "is_configured": true, 00:12:23.518 "data_offset": 2048, 00:12:23.518 "data_size": 63488 00:12:23.518 }, 00:12:23.518 { 00:12:23.518 "name": "BaseBdev2", 00:12:23.518 "uuid": "d8d45346-7ed4-5643-92d0-2ec584a2c702", 00:12:23.518 "is_configured": true, 00:12:23.518 "data_offset": 2048, 00:12:23.518 "data_size": 63488 00:12:23.518 }, 00:12:23.518 { 00:12:23.518 "name": "BaseBdev3", 00:12:23.518 "uuid": "22a149b3-52b3-5fcb-b637-013515082dd8", 00:12:23.518 "is_configured": true, 00:12:23.518 "data_offset": 2048, 00:12:23.518 "data_size": 63488 00:12:23.518 }, 00:12:23.518 { 00:12:23.518 "name": "BaseBdev4", 00:12:23.518 "uuid": "8dff989b-63dd-5cb6-af5d-9840b4fa34b9", 00:12:23.518 "is_configured": true, 00:12:23.518 "data_offset": 2048, 00:12:23.518 "data_size": 63488 00:12:23.518 } 00:12:23.518 ] 00:12:23.518 }' 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.518 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.084 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:24.084 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.084 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.084 [2024-11-20 14:23:02.775638] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:24.084 [2024-11-20 14:23:02.775822] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.084 [2024-11-20 14:23:02.779287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.084 [2024-11-20 14:23:02.779496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.084 [2024-11-20 14:23:02.779605] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.084 [2024-11-20 14:23:02.779836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:24.084 { 00:12:24.084 "results": [ 00:12:24.084 { 00:12:24.084 "job": "raid_bdev1", 00:12:24.084 "core_mask": "0x1", 00:12:24.084 "workload": "randrw", 00:12:24.084 "percentage": 50, 00:12:24.084 "status": "finished", 00:12:24.084 "queue_depth": 1, 00:12:24.084 "io_size": 131072, 00:12:24.084 "runtime": 1.366876, 00:12:24.084 "iops": 9906.531389826143, 00:12:24.084 "mibps": 1238.3164237282679, 00:12:24.084 "io_failed": 1, 00:12:24.084 "io_timeout": 0, 00:12:24.084 "avg_latency_us": 140.99241846913978, 00:12:24.084 "min_latency_us": 43.75272727272727, 00:12:24.084 "max_latency_us": 1824.581818181818 00:12:24.084 } 00:12:24.084 ], 00:12:24.084 "core_count": 1 00:12:24.084 } 00:12:24.084 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.084 14:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71104 00:12:24.084 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71104 ']' 00:12:24.084 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71104 00:12:24.084 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:24.084 14:23:02 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.085 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71104 00:12:24.085 killing process with pid 71104 00:12:24.085 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.085 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.085 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71104' 00:12:24.085 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71104 00:12:24.085 [2024-11-20 14:23:02.814461] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:24.085 14:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71104 00:12:24.343 [2024-11-20 14:23:03.097148] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:25.278 14:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WHZCB7ovOn 00:12:25.278 14:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:25.278 14:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:25.278 14:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:25.278 14:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:25.279 ************************************ 00:12:25.279 END TEST raid_read_error_test 00:12:25.279 ************************************ 00:12:25.279 14:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:25.279 14:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:25.279 14:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:25.279 00:12:25.279 real 0m4.827s 
00:12:25.279 user 0m5.950s 00:12:25.279 sys 0m0.611s 00:12:25.279 14:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.279 14:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.279 14:23:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:12:25.279 14:23:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:25.279 14:23:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.279 14:23:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:25.279 ************************************ 00:12:25.279 START TEST raid_write_error_test 00:12:25.279 ************************************ 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6LpanfFvVY 00:12:25.279 14:23:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71250 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71250 00:12:25.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71250 ']' 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.279 14:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.537 [2024-11-20 14:23:04.361125] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:12:25.537 [2024-11-20 14:23:04.361274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71250 ] 00:12:25.794 [2024-11-20 14:23:04.531761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.794 [2024-11-20 14:23:04.652910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.052 [2024-11-20 14:23:04.853133] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.052 [2024-11-20 14:23:04.853205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.619 BaseBdev1_malloc 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.619 true 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.619 [2024-11-20 14:23:05.363227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:26.619 [2024-11-20 14:23:05.363296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.619 [2024-11-20 14:23:05.363326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:26.619 [2024-11-20 14:23:05.363345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.619 [2024-11-20 14:23:05.366096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.619 [2024-11-20 14:23:05.366147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:26.619 BaseBdev1 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.619 BaseBdev2_malloc 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:26.619 14:23:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.619 true 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.619 [2024-11-20 14:23:05.426634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:26.619 [2024-11-20 14:23:05.426703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.619 [2024-11-20 14:23:05.426729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:26.619 [2024-11-20 14:23:05.426747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.619 [2024-11-20 14:23:05.429475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.619 BaseBdev2 00:12:26.619 [2024-11-20 14:23:05.429658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:26.619 BaseBdev3_malloc 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.619 true 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.619 [2024-11-20 14:23:05.502453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:26.619 [2024-11-20 14:23:05.502516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.619 [2024-11-20 14:23:05.502544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:26.619 [2024-11-20 14:23:05.502562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.619 [2024-11-20 14:23:05.505319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.619 [2024-11-20 14:23:05.505366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:26.619 BaseBdev3 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.619 BaseBdev4_malloc 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.619 true 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.619 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.619 [2024-11-20 14:23:05.563363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:26.619 [2024-11-20 14:23:05.563424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.620 [2024-11-20 14:23:05.563452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:26.620 [2024-11-20 14:23:05.563469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.620 [2024-11-20 14:23:05.566217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.620 [2024-11-20 14:23:05.566265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:26.620 BaseBdev4 
00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.620 [2024-11-20 14:23:05.571441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:26.620 [2024-11-20 14:23:05.573842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:26.620 [2024-11-20 14:23:05.574098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.620 [2024-11-20 14:23:05.574213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:26.620 [2024-11-20 14:23:05.574504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:26.620 [2024-11-20 14:23:05.574530] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:26.620 [2024-11-20 14:23:05.574900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:26.620 [2024-11-20 14:23:05.575181] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:26.620 [2024-11-20 14:23:05.575208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:26.620 [2024-11-20 14:23:05.575453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.620 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.878 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.878 "name": "raid_bdev1", 00:12:26.878 "uuid": "0fdd557d-17f7-4529-924a-e2e79d231e32", 00:12:26.878 "strip_size_kb": 64, 00:12:26.878 "state": "online", 00:12:26.878 "raid_level": "raid0", 00:12:26.878 "superblock": true, 00:12:26.878 "num_base_bdevs": 4, 00:12:26.878 "num_base_bdevs_discovered": 4, 00:12:26.878 
"num_base_bdevs_operational": 4, 00:12:26.878 "base_bdevs_list": [ 00:12:26.878 { 00:12:26.878 "name": "BaseBdev1", 00:12:26.878 "uuid": "3415d7e1-7f79-5e67-9216-b41babec5830", 00:12:26.878 "is_configured": true, 00:12:26.878 "data_offset": 2048, 00:12:26.878 "data_size": 63488 00:12:26.878 }, 00:12:26.878 { 00:12:26.878 "name": "BaseBdev2", 00:12:26.878 "uuid": "fae04c34-c9d9-51a1-9153-fa802818e149", 00:12:26.878 "is_configured": true, 00:12:26.878 "data_offset": 2048, 00:12:26.878 "data_size": 63488 00:12:26.878 }, 00:12:26.878 { 00:12:26.878 "name": "BaseBdev3", 00:12:26.878 "uuid": "e258732c-b3b2-5472-afcd-3ed75900b6a3", 00:12:26.878 "is_configured": true, 00:12:26.878 "data_offset": 2048, 00:12:26.878 "data_size": 63488 00:12:26.878 }, 00:12:26.878 { 00:12:26.878 "name": "BaseBdev4", 00:12:26.878 "uuid": "d13f3f95-84b4-508e-a8c7-e7e969b92098", 00:12:26.878 "is_configured": true, 00:12:26.878 "data_offset": 2048, 00:12:26.878 "data_size": 63488 00:12:26.878 } 00:12:26.878 ] 00:12:26.878 }' 00:12:26.878 14:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.878 14:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.136 14:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:27.136 14:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:27.394 [2024-11-20 14:23:06.165049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.327 "name": "raid_bdev1", 00:12:28.327 "uuid": "0fdd557d-17f7-4529-924a-e2e79d231e32", 00:12:28.327 "strip_size_kb": 64, 00:12:28.327 "state": "online", 00:12:28.327 "raid_level": "raid0", 00:12:28.327 "superblock": true, 00:12:28.327 "num_base_bdevs": 4, 00:12:28.327 "num_base_bdevs_discovered": 4, 00:12:28.327 "num_base_bdevs_operational": 4, 00:12:28.327 "base_bdevs_list": [ 00:12:28.327 { 00:12:28.327 "name": "BaseBdev1", 00:12:28.327 "uuid": "3415d7e1-7f79-5e67-9216-b41babec5830", 00:12:28.327 "is_configured": true, 00:12:28.327 "data_offset": 2048, 00:12:28.327 "data_size": 63488 00:12:28.327 }, 00:12:28.327 { 00:12:28.327 "name": "BaseBdev2", 00:12:28.327 "uuid": "fae04c34-c9d9-51a1-9153-fa802818e149", 00:12:28.327 "is_configured": true, 00:12:28.327 "data_offset": 2048, 00:12:28.327 "data_size": 63488 00:12:28.327 }, 00:12:28.327 { 00:12:28.327 "name": "BaseBdev3", 00:12:28.327 "uuid": "e258732c-b3b2-5472-afcd-3ed75900b6a3", 00:12:28.327 "is_configured": true, 00:12:28.327 "data_offset": 2048, 00:12:28.327 "data_size": 63488 00:12:28.327 }, 00:12:28.327 { 00:12:28.327 "name": "BaseBdev4", 00:12:28.327 "uuid": "d13f3f95-84b4-508e-a8c7-e7e969b92098", 00:12:28.327 "is_configured": true, 00:12:28.327 "data_offset": 2048, 00:12:28.327 "data_size": 63488 00:12:28.327 } 00:12:28.327 ] 00:12:28.327 }' 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.327 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.893 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:28.893 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.893 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:28.893 [2024-11-20 14:23:07.607377] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.893 [2024-11-20 14:23:07.607416] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.893 [2024-11-20 14:23:07.610756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.893 [2024-11-20 14:23:07.610829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.893 [2024-11-20 14:23:07.610889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.893 [2024-11-20 14:23:07.610907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:28.893 { 00:12:28.893 "results": [ 00:12:28.893 { 00:12:28.893 "job": "raid_bdev1", 00:12:28.893 "core_mask": "0x1", 00:12:28.893 "workload": "randrw", 00:12:28.893 "percentage": 50, 00:12:28.893 "status": "finished", 00:12:28.893 "queue_depth": 1, 00:12:28.893 "io_size": 131072, 00:12:28.893 "runtime": 1.43977, 00:12:28.893 "iops": 10799.641609423727, 00:12:28.893 "mibps": 1349.9552011779658, 00:12:28.893 "io_failed": 1, 00:12:28.893 "io_timeout": 0, 00:12:28.893 "avg_latency_us": 128.72144706226248, 00:12:28.893 "min_latency_us": 42.82181818181818, 00:12:28.893 "max_latency_us": 2219.287272727273 00:12:28.893 } 00:12:28.893 ], 00:12:28.893 "core_count": 1 00:12:28.893 } 00:12:28.893 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.893 14:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71250 00:12:28.893 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71250 ']' 00:12:28.893 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71250 00:12:28.893 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:12:28.894 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.894 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71250 00:12:28.894 killing process with pid 71250 00:12:28.894 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:28.894 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:28.894 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71250' 00:12:28.894 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71250 00:12:28.894 [2024-11-20 14:23:07.645547] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:28.894 14:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71250 00:12:29.155 [2024-11-20 14:23:07.925702] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:30.089 14:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6LpanfFvVY 00:12:30.089 14:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:30.089 14:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:30.089 14:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:12:30.089 14:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:30.089 14:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:30.089 14:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:30.089 14:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:12:30.089 00:12:30.089 real 0m4.770s 00:12:30.089 user 0m5.873s 00:12:30.089 sys 0m0.526s 00:12:30.089 14:23:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.089 ************************************ 00:12:30.089 END TEST raid_write_error_test 00:12:30.089 ************************************ 00:12:30.089 14:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.089 14:23:09 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:30.089 14:23:09 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:12:30.089 14:23:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:30.089 14:23:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.089 14:23:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:30.089 ************************************ 00:12:30.089 START TEST raid_state_function_test 00:12:30.089 ************************************ 00:12:30.089 14:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:12:30.089 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:30.090 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:30.090 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:30.090 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:30.090 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:30.090 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.090 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:30.090 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.090 14:23:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.090 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:30.349 Process raid pid: 71400 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71400 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71400' 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71400 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71400 ']' 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.349 14:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.349 [2024-11-20 14:23:09.160583] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:12:30.349 [2024-11-20 14:23:09.160936] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.349 [2024-11-20 14:23:09.329314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.609 [2024-11-20 14:23:09.456085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.866 [2024-11-20 14:23:09.663400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.866 [2024-11-20 14:23:09.663437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.433 [2024-11-20 14:23:10.180239] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.433 [2024-11-20 14:23:10.180308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.433 [2024-11-20 14:23:10.180326] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.433 [2024-11-20 14:23:10.180342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.433 [2024-11-20 14:23:10.180352] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:31.433 [2024-11-20 14:23:10.180367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.433 [2024-11-20 14:23:10.180376] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:31.433 [2024-11-20 14:23:10.180391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.433 "name": "Existed_Raid", 00:12:31.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.433 "strip_size_kb": 64, 00:12:31.433 "state": "configuring", 00:12:31.433 "raid_level": "concat", 00:12:31.433 "superblock": false, 00:12:31.433 "num_base_bdevs": 4, 00:12:31.433 "num_base_bdevs_discovered": 0, 00:12:31.433 "num_base_bdevs_operational": 4, 00:12:31.433 "base_bdevs_list": [ 00:12:31.433 { 00:12:31.433 "name": "BaseBdev1", 00:12:31.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.433 "is_configured": false, 00:12:31.433 "data_offset": 0, 00:12:31.433 "data_size": 0 00:12:31.433 }, 00:12:31.433 { 00:12:31.433 "name": "BaseBdev2", 00:12:31.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.433 "is_configured": false, 00:12:31.433 "data_offset": 0, 00:12:31.433 "data_size": 0 00:12:31.433 }, 00:12:31.433 { 00:12:31.433 "name": "BaseBdev3", 00:12:31.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.433 "is_configured": false, 00:12:31.433 "data_offset": 0, 00:12:31.433 "data_size": 0 00:12:31.433 }, 00:12:31.433 { 00:12:31.433 "name": "BaseBdev4", 00:12:31.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.433 "is_configured": false, 00:12:31.433 "data_offset": 0, 00:12:31.433 "data_size": 0 00:12:31.433 } 00:12:31.433 ] 00:12:31.433 }' 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.433 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.691 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:31.691 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.691 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.691 [2024-11-20 14:23:10.668308] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.691 [2024-11-20 14:23:10.668354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.952 [2024-11-20 14:23:10.676315] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.952 [2024-11-20 14:23:10.676367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.952 [2024-11-20 14:23:10.676383] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.952 [2024-11-20 14:23:10.676399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.952 [2024-11-20 14:23:10.676409] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:31.952 [2024-11-20 14:23:10.676423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.952 [2024-11-20 14:23:10.676432] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:31.952 [2024-11-20 14:23:10.676446] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.952 [2024-11-20 14:23:10.720702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:31.952 BaseBdev1 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.952 [ 00:12:31.952 { 00:12:31.952 "name": "BaseBdev1", 00:12:31.952 "aliases": [ 00:12:31.952 "a38fe05f-94d8-416b-a039-5c0f1e888a8b" 00:12:31.952 ], 00:12:31.952 "product_name": "Malloc disk", 00:12:31.952 "block_size": 512, 00:12:31.952 "num_blocks": 65536, 00:12:31.952 "uuid": "a38fe05f-94d8-416b-a039-5c0f1e888a8b", 00:12:31.952 "assigned_rate_limits": { 00:12:31.952 "rw_ios_per_sec": 0, 00:12:31.952 "rw_mbytes_per_sec": 0, 00:12:31.952 "r_mbytes_per_sec": 0, 00:12:31.952 "w_mbytes_per_sec": 0 00:12:31.952 }, 00:12:31.952 "claimed": true, 00:12:31.952 "claim_type": "exclusive_write", 00:12:31.952 "zoned": false, 00:12:31.952 "supported_io_types": { 00:12:31.952 "read": true, 00:12:31.952 "write": true, 00:12:31.952 "unmap": true, 00:12:31.952 "flush": true, 00:12:31.952 "reset": true, 00:12:31.952 "nvme_admin": false, 00:12:31.952 "nvme_io": false, 00:12:31.952 "nvme_io_md": false, 00:12:31.952 "write_zeroes": true, 00:12:31.952 "zcopy": true, 00:12:31.952 "get_zone_info": false, 00:12:31.952 "zone_management": false, 00:12:31.952 "zone_append": false, 00:12:31.952 "compare": false, 00:12:31.952 "compare_and_write": false, 00:12:31.952 "abort": true, 00:12:31.952 "seek_hole": false, 00:12:31.952 "seek_data": false, 00:12:31.952 "copy": true, 00:12:31.952 "nvme_iov_md": false 00:12:31.952 }, 00:12:31.952 "memory_domains": [ 00:12:31.952 { 00:12:31.952 "dma_device_id": "system", 00:12:31.952 "dma_device_type": 1 00:12:31.952 }, 00:12:31.952 { 00:12:31.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.952 "dma_device_type": 2 00:12:31.952 } 00:12:31.952 ], 00:12:31.952 "driver_specific": {} 00:12:31.952 } 00:12:31.952 ] 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.952 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.953 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.953 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.953 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.953 "name": "Existed_Raid", 
00:12:31.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.953 "strip_size_kb": 64, 00:12:31.953 "state": "configuring", 00:12:31.953 "raid_level": "concat", 00:12:31.953 "superblock": false, 00:12:31.953 "num_base_bdevs": 4, 00:12:31.953 "num_base_bdevs_discovered": 1, 00:12:31.953 "num_base_bdevs_operational": 4, 00:12:31.953 "base_bdevs_list": [ 00:12:31.953 { 00:12:31.953 "name": "BaseBdev1", 00:12:31.953 "uuid": "a38fe05f-94d8-416b-a039-5c0f1e888a8b", 00:12:31.953 "is_configured": true, 00:12:31.953 "data_offset": 0, 00:12:31.953 "data_size": 65536 00:12:31.953 }, 00:12:31.953 { 00:12:31.953 "name": "BaseBdev2", 00:12:31.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.953 "is_configured": false, 00:12:31.953 "data_offset": 0, 00:12:31.953 "data_size": 0 00:12:31.953 }, 00:12:31.953 { 00:12:31.953 "name": "BaseBdev3", 00:12:31.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.953 "is_configured": false, 00:12:31.953 "data_offset": 0, 00:12:31.953 "data_size": 0 00:12:31.953 }, 00:12:31.953 { 00:12:31.953 "name": "BaseBdev4", 00:12:31.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.953 "is_configured": false, 00:12:31.953 "data_offset": 0, 00:12:31.953 "data_size": 0 00:12:31.953 } 00:12:31.953 ] 00:12:31.953 }' 00:12:31.953 14:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.953 14:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.520 [2024-11-20 14:23:11.228876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:32.520 [2024-11-20 14:23:11.228938] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.520 [2024-11-20 14:23:11.236927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.520 [2024-11-20 14:23:11.239294] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:32.520 [2024-11-20 14:23:11.239348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:32.520 [2024-11-20 14:23:11.239365] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:32.520 [2024-11-20 14:23:11.239382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:32.520 [2024-11-20 14:23:11.239393] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:32.520 [2024-11-20 14:23:11.239407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.520 "name": "Existed_Raid", 00:12:32.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.520 "strip_size_kb": 64, 00:12:32.520 "state": "configuring", 00:12:32.520 "raid_level": "concat", 00:12:32.520 "superblock": false, 00:12:32.520 "num_base_bdevs": 4, 00:12:32.520 
"num_base_bdevs_discovered": 1, 00:12:32.520 "num_base_bdevs_operational": 4, 00:12:32.520 "base_bdevs_list": [ 00:12:32.520 { 00:12:32.520 "name": "BaseBdev1", 00:12:32.520 "uuid": "a38fe05f-94d8-416b-a039-5c0f1e888a8b", 00:12:32.520 "is_configured": true, 00:12:32.520 "data_offset": 0, 00:12:32.520 "data_size": 65536 00:12:32.520 }, 00:12:32.520 { 00:12:32.520 "name": "BaseBdev2", 00:12:32.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.520 "is_configured": false, 00:12:32.520 "data_offset": 0, 00:12:32.520 "data_size": 0 00:12:32.520 }, 00:12:32.520 { 00:12:32.520 "name": "BaseBdev3", 00:12:32.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.520 "is_configured": false, 00:12:32.520 "data_offset": 0, 00:12:32.520 "data_size": 0 00:12:32.520 }, 00:12:32.520 { 00:12:32.520 "name": "BaseBdev4", 00:12:32.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.520 "is_configured": false, 00:12:32.520 "data_offset": 0, 00:12:32.520 "data_size": 0 00:12:32.520 } 00:12:32.520 ] 00:12:32.520 }' 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.520 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.779 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:32.779 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.779 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.779 [2024-11-20 14:23:11.722808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.779 BaseBdev2 00:12:32.779 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.779 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:32.779 14:23:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:32.779 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.779 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:32.779 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.779 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.779 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.779 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.779 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.779 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.779 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:32.779 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.779 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.779 [ 00:12:32.779 { 00:12:32.779 "name": "BaseBdev2", 00:12:32.779 "aliases": [ 00:12:32.779 "0313b22c-0d3a-4a07-b46a-f9956cc78fd2" 00:12:32.779 ], 00:12:32.779 "product_name": "Malloc disk", 00:12:32.779 "block_size": 512, 00:12:32.779 "num_blocks": 65536, 00:12:32.779 "uuid": "0313b22c-0d3a-4a07-b46a-f9956cc78fd2", 00:12:32.779 "assigned_rate_limits": { 00:12:32.779 "rw_ios_per_sec": 0, 00:12:32.779 "rw_mbytes_per_sec": 0, 00:12:32.779 "r_mbytes_per_sec": 0, 00:12:32.779 "w_mbytes_per_sec": 0 00:12:32.780 }, 00:12:32.780 "claimed": true, 00:12:32.780 "claim_type": "exclusive_write", 00:12:32.780 "zoned": false, 00:12:32.780 "supported_io_types": { 
00:12:32.780 "read": true, 00:12:32.780 "write": true, 00:12:32.780 "unmap": true, 00:12:32.780 "flush": true, 00:12:32.780 "reset": true, 00:12:32.780 "nvme_admin": false, 00:12:32.780 "nvme_io": false, 00:12:32.780 "nvme_io_md": false, 00:12:32.780 "write_zeroes": true, 00:12:32.780 "zcopy": true, 00:12:32.780 "get_zone_info": false, 00:12:32.780 "zone_management": false, 00:12:32.780 "zone_append": false, 00:12:32.780 "compare": false, 00:12:32.780 "compare_and_write": false, 00:12:32.780 "abort": true, 00:12:32.780 "seek_hole": false, 00:12:32.780 "seek_data": false, 00:12:32.780 "copy": true, 00:12:32.780 "nvme_iov_md": false 00:12:32.780 }, 00:12:32.780 "memory_domains": [ 00:12:32.780 { 00:12:32.780 "dma_device_id": "system", 00:12:32.780 "dma_device_type": 1 00:12:32.780 }, 00:12:32.780 { 00:12:32.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.780 "dma_device_type": 2 00:12:32.780 } 00:12:32.780 ], 00:12:32.780 "driver_specific": {} 00:12:32.780 } 00:12:32.780 ] 00:12:32.780 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.780 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:32.780 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:32.780 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:32.780 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:32.780 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.780 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.780 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.039 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:33.039 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.039 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.039 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.039 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.039 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.039 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.039 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.039 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.039 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.039 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.039 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.039 "name": "Existed_Raid", 00:12:33.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.039 "strip_size_kb": 64, 00:12:33.039 "state": "configuring", 00:12:33.039 "raid_level": "concat", 00:12:33.039 "superblock": false, 00:12:33.039 "num_base_bdevs": 4, 00:12:33.039 "num_base_bdevs_discovered": 2, 00:12:33.039 "num_base_bdevs_operational": 4, 00:12:33.039 "base_bdevs_list": [ 00:12:33.039 { 00:12:33.039 "name": "BaseBdev1", 00:12:33.039 "uuid": "a38fe05f-94d8-416b-a039-5c0f1e888a8b", 00:12:33.039 "is_configured": true, 00:12:33.039 "data_offset": 0, 00:12:33.039 "data_size": 65536 00:12:33.039 }, 00:12:33.039 { 00:12:33.039 "name": "BaseBdev2", 00:12:33.039 "uuid": "0313b22c-0d3a-4a07-b46a-f9956cc78fd2", 00:12:33.039 
"is_configured": true, 00:12:33.039 "data_offset": 0, 00:12:33.039 "data_size": 65536 00:12:33.039 }, 00:12:33.039 { 00:12:33.039 "name": "BaseBdev3", 00:12:33.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.039 "is_configured": false, 00:12:33.039 "data_offset": 0, 00:12:33.039 "data_size": 0 00:12:33.039 }, 00:12:33.039 { 00:12:33.039 "name": "BaseBdev4", 00:12:33.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.039 "is_configured": false, 00:12:33.039 "data_offset": 0, 00:12:33.039 "data_size": 0 00:12:33.039 } 00:12:33.039 ] 00:12:33.039 }' 00:12:33.039 14:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.039 14:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.298 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:33.298 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.298 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.298 [2024-11-20 14:23:12.269887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.298 BaseBdev3 00:12:33.298 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.298 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:33.298 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:33.298 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.298 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:33.298 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.298 14:23:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.298 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:33.298 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.298 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.556 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.556 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:33.556 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.556 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.556 [ 00:12:33.556 { 00:12:33.556 "name": "BaseBdev3", 00:12:33.557 "aliases": [ 00:12:33.557 "a630abd1-705e-4cc0-8dc3-f020f3e49104" 00:12:33.557 ], 00:12:33.557 "product_name": "Malloc disk", 00:12:33.557 "block_size": 512, 00:12:33.557 "num_blocks": 65536, 00:12:33.557 "uuid": "a630abd1-705e-4cc0-8dc3-f020f3e49104", 00:12:33.557 "assigned_rate_limits": { 00:12:33.557 "rw_ios_per_sec": 0, 00:12:33.557 "rw_mbytes_per_sec": 0, 00:12:33.557 "r_mbytes_per_sec": 0, 00:12:33.557 "w_mbytes_per_sec": 0 00:12:33.557 }, 00:12:33.557 "claimed": true, 00:12:33.557 "claim_type": "exclusive_write", 00:12:33.557 "zoned": false, 00:12:33.557 "supported_io_types": { 00:12:33.557 "read": true, 00:12:33.557 "write": true, 00:12:33.557 "unmap": true, 00:12:33.557 "flush": true, 00:12:33.557 "reset": true, 00:12:33.557 "nvme_admin": false, 00:12:33.557 "nvme_io": false, 00:12:33.557 "nvme_io_md": false, 00:12:33.557 "write_zeroes": true, 00:12:33.557 "zcopy": true, 00:12:33.557 "get_zone_info": false, 00:12:33.557 "zone_management": false, 00:12:33.557 "zone_append": false, 00:12:33.557 "compare": false, 00:12:33.557 "compare_and_write": false, 
00:12:33.557 "abort": true, 00:12:33.557 "seek_hole": false, 00:12:33.557 "seek_data": false, 00:12:33.557 "copy": true, 00:12:33.557 "nvme_iov_md": false 00:12:33.557 }, 00:12:33.557 "memory_domains": [ 00:12:33.557 { 00:12:33.557 "dma_device_id": "system", 00:12:33.557 "dma_device_type": 1 00:12:33.557 }, 00:12:33.557 { 00:12:33.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.557 "dma_device_type": 2 00:12:33.557 } 00:12:33.557 ], 00:12:33.557 "driver_specific": {} 00:12:33.557 } 00:12:33.557 ] 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.557 "name": "Existed_Raid", 00:12:33.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.557 "strip_size_kb": 64, 00:12:33.557 "state": "configuring", 00:12:33.557 "raid_level": "concat", 00:12:33.557 "superblock": false, 00:12:33.557 "num_base_bdevs": 4, 00:12:33.557 "num_base_bdevs_discovered": 3, 00:12:33.557 "num_base_bdevs_operational": 4, 00:12:33.557 "base_bdevs_list": [ 00:12:33.557 { 00:12:33.557 "name": "BaseBdev1", 00:12:33.557 "uuid": "a38fe05f-94d8-416b-a039-5c0f1e888a8b", 00:12:33.557 "is_configured": true, 00:12:33.557 "data_offset": 0, 00:12:33.557 "data_size": 65536 00:12:33.557 }, 00:12:33.557 { 00:12:33.557 "name": "BaseBdev2", 00:12:33.557 "uuid": "0313b22c-0d3a-4a07-b46a-f9956cc78fd2", 00:12:33.557 "is_configured": true, 00:12:33.557 "data_offset": 0, 00:12:33.557 "data_size": 65536 00:12:33.557 }, 00:12:33.557 { 00:12:33.557 "name": "BaseBdev3", 00:12:33.557 "uuid": "a630abd1-705e-4cc0-8dc3-f020f3e49104", 00:12:33.557 "is_configured": true, 00:12:33.557 "data_offset": 0, 00:12:33.557 "data_size": 65536 00:12:33.557 }, 00:12:33.557 { 00:12:33.557 "name": "BaseBdev4", 00:12:33.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.557 "is_configured": false, 
00:12:33.557 "data_offset": 0, 00:12:33.557 "data_size": 0 00:12:33.557 } 00:12:33.557 ] 00:12:33.557 }' 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.557 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.849 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:33.849 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.849 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.849 [2024-11-20 14:23:12.792242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:33.849 [2024-11-20 14:23:12.792478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:33.849 [2024-11-20 14:23:12.792503] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:33.849 [2024-11-20 14:23:12.792855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:33.849 [2024-11-20 14:23:12.793097] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:33.849 [2024-11-20 14:23:12.793128] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:33.849 [2024-11-20 14:23:12.793435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.849 BaseBdev4 00:12:33.849 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.849 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:33.849 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:33.849 14:23:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.849 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:33.849 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.849 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.849 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:33.849 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.849 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.849 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.849 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:33.849 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.849 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.849 [ 00:12:33.849 { 00:12:33.849 "name": "BaseBdev4", 00:12:33.849 "aliases": [ 00:12:33.849 "f12ac467-f5fc-4bb7-ae8e-8935466b3abe" 00:12:33.849 ], 00:12:33.849 "product_name": "Malloc disk", 00:12:33.849 "block_size": 512, 00:12:33.849 "num_blocks": 65536, 00:12:33.849 "uuid": "f12ac467-f5fc-4bb7-ae8e-8935466b3abe", 00:12:33.849 "assigned_rate_limits": { 00:12:33.849 "rw_ios_per_sec": 0, 00:12:33.849 "rw_mbytes_per_sec": 0, 00:12:33.849 "r_mbytes_per_sec": 0, 00:12:33.849 "w_mbytes_per_sec": 0 00:12:33.849 }, 00:12:33.849 "claimed": true, 00:12:33.849 "claim_type": "exclusive_write", 00:12:33.849 "zoned": false, 00:12:33.849 "supported_io_types": { 00:12:33.849 "read": true, 00:12:33.849 "write": true, 00:12:33.849 "unmap": true, 00:12:33.849 "flush": true, 00:12:33.849 "reset": true, 00:12:33.850 
"nvme_admin": false, 00:12:33.850 "nvme_io": false, 00:12:33.850 "nvme_io_md": false, 00:12:33.850 "write_zeroes": true, 00:12:33.850 "zcopy": true, 00:12:33.850 "get_zone_info": false, 00:12:33.850 "zone_management": false, 00:12:33.850 "zone_append": false, 00:12:33.850 "compare": false, 00:12:33.850 "compare_and_write": false, 00:12:33.850 "abort": true, 00:12:33.850 "seek_hole": false, 00:12:33.850 "seek_data": false, 00:12:33.850 "copy": true, 00:12:33.850 "nvme_iov_md": false 00:12:33.850 }, 00:12:33.850 "memory_domains": [ 00:12:33.850 { 00:12:33.850 "dma_device_id": "system", 00:12:33.850 "dma_device_type": 1 00:12:33.850 }, 00:12:33.850 { 00:12:33.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.850 "dma_device_type": 2 00:12:33.850 } 00:12:33.850 ], 00:12:33.850 "driver_specific": {} 00:12:33.850 } 00:12:33.850 ] 00:12:33.850 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.850 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:33.850 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:33.850 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:33.850 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:33.850 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.850 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.850 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.850 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.850 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.850 
14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.850 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.850 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.850 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.850 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.850 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.850 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.850 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.109 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.109 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.109 "name": "Existed_Raid", 00:12:34.109 "uuid": "e16789f8-0d02-46e1-aa9f-bb2c18f7e7ee", 00:12:34.109 "strip_size_kb": 64, 00:12:34.109 "state": "online", 00:12:34.109 "raid_level": "concat", 00:12:34.109 "superblock": false, 00:12:34.109 "num_base_bdevs": 4, 00:12:34.109 "num_base_bdevs_discovered": 4, 00:12:34.109 "num_base_bdevs_operational": 4, 00:12:34.109 "base_bdevs_list": [ 00:12:34.109 { 00:12:34.109 "name": "BaseBdev1", 00:12:34.109 "uuid": "a38fe05f-94d8-416b-a039-5c0f1e888a8b", 00:12:34.109 "is_configured": true, 00:12:34.109 "data_offset": 0, 00:12:34.109 "data_size": 65536 00:12:34.109 }, 00:12:34.109 { 00:12:34.109 "name": "BaseBdev2", 00:12:34.109 "uuid": "0313b22c-0d3a-4a07-b46a-f9956cc78fd2", 00:12:34.109 "is_configured": true, 00:12:34.109 "data_offset": 0, 00:12:34.109 "data_size": 65536 00:12:34.109 }, 00:12:34.109 { 00:12:34.109 "name": "BaseBdev3", 
00:12:34.109 "uuid": "a630abd1-705e-4cc0-8dc3-f020f3e49104", 00:12:34.109 "is_configured": true, 00:12:34.109 "data_offset": 0, 00:12:34.109 "data_size": 65536 00:12:34.109 }, 00:12:34.109 { 00:12:34.109 "name": "BaseBdev4", 00:12:34.109 "uuid": "f12ac467-f5fc-4bb7-ae8e-8935466b3abe", 00:12:34.109 "is_configured": true, 00:12:34.109 "data_offset": 0, 00:12:34.109 "data_size": 65536 00:12:34.109 } 00:12:34.109 ] 00:12:34.109 }' 00:12:34.109 14:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.109 14:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.367 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:34.367 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:34.367 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:34.367 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:34.367 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:34.367 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:34.367 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:34.367 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:34.367 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.367 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.368 [2024-11-20 14:23:13.316885] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.368 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.368 
14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:34.368 "name": "Existed_Raid", 00:12:34.368 "aliases": [ 00:12:34.368 "e16789f8-0d02-46e1-aa9f-bb2c18f7e7ee" 00:12:34.368 ], 00:12:34.368 "product_name": "Raid Volume", 00:12:34.368 "block_size": 512, 00:12:34.368 "num_blocks": 262144, 00:12:34.368 "uuid": "e16789f8-0d02-46e1-aa9f-bb2c18f7e7ee", 00:12:34.368 "assigned_rate_limits": { 00:12:34.368 "rw_ios_per_sec": 0, 00:12:34.368 "rw_mbytes_per_sec": 0, 00:12:34.368 "r_mbytes_per_sec": 0, 00:12:34.368 "w_mbytes_per_sec": 0 00:12:34.368 }, 00:12:34.368 "claimed": false, 00:12:34.368 "zoned": false, 00:12:34.368 "supported_io_types": { 00:12:34.368 "read": true, 00:12:34.368 "write": true, 00:12:34.368 "unmap": true, 00:12:34.368 "flush": true, 00:12:34.368 "reset": true, 00:12:34.368 "nvme_admin": false, 00:12:34.368 "nvme_io": false, 00:12:34.368 "nvme_io_md": false, 00:12:34.368 "write_zeroes": true, 00:12:34.368 "zcopy": false, 00:12:34.368 "get_zone_info": false, 00:12:34.368 "zone_management": false, 00:12:34.368 "zone_append": false, 00:12:34.368 "compare": false, 00:12:34.368 "compare_and_write": false, 00:12:34.368 "abort": false, 00:12:34.368 "seek_hole": false, 00:12:34.368 "seek_data": false, 00:12:34.368 "copy": false, 00:12:34.368 "nvme_iov_md": false 00:12:34.368 }, 00:12:34.368 "memory_domains": [ 00:12:34.368 { 00:12:34.368 "dma_device_id": "system", 00:12:34.368 "dma_device_type": 1 00:12:34.368 }, 00:12:34.368 { 00:12:34.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.368 "dma_device_type": 2 00:12:34.368 }, 00:12:34.368 { 00:12:34.368 "dma_device_id": "system", 00:12:34.368 "dma_device_type": 1 00:12:34.368 }, 00:12:34.368 { 00:12:34.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.368 "dma_device_type": 2 00:12:34.368 }, 00:12:34.368 { 00:12:34.368 "dma_device_id": "system", 00:12:34.368 "dma_device_type": 1 00:12:34.368 }, 00:12:34.368 { 00:12:34.368 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:34.368 "dma_device_type": 2 00:12:34.368 }, 00:12:34.368 { 00:12:34.368 "dma_device_id": "system", 00:12:34.368 "dma_device_type": 1 00:12:34.368 }, 00:12:34.368 { 00:12:34.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.368 "dma_device_type": 2 00:12:34.368 } 00:12:34.368 ], 00:12:34.368 "driver_specific": { 00:12:34.368 "raid": { 00:12:34.368 "uuid": "e16789f8-0d02-46e1-aa9f-bb2c18f7e7ee", 00:12:34.368 "strip_size_kb": 64, 00:12:34.368 "state": "online", 00:12:34.368 "raid_level": "concat", 00:12:34.368 "superblock": false, 00:12:34.368 "num_base_bdevs": 4, 00:12:34.368 "num_base_bdevs_discovered": 4, 00:12:34.368 "num_base_bdevs_operational": 4, 00:12:34.368 "base_bdevs_list": [ 00:12:34.368 { 00:12:34.368 "name": "BaseBdev1", 00:12:34.368 "uuid": "a38fe05f-94d8-416b-a039-5c0f1e888a8b", 00:12:34.368 "is_configured": true, 00:12:34.368 "data_offset": 0, 00:12:34.368 "data_size": 65536 00:12:34.368 }, 00:12:34.368 { 00:12:34.368 "name": "BaseBdev2", 00:12:34.368 "uuid": "0313b22c-0d3a-4a07-b46a-f9956cc78fd2", 00:12:34.368 "is_configured": true, 00:12:34.368 "data_offset": 0, 00:12:34.368 "data_size": 65536 00:12:34.368 }, 00:12:34.368 { 00:12:34.368 "name": "BaseBdev3", 00:12:34.368 "uuid": "a630abd1-705e-4cc0-8dc3-f020f3e49104", 00:12:34.368 "is_configured": true, 00:12:34.368 "data_offset": 0, 00:12:34.368 "data_size": 65536 00:12:34.368 }, 00:12:34.368 { 00:12:34.368 "name": "BaseBdev4", 00:12:34.368 "uuid": "f12ac467-f5fc-4bb7-ae8e-8935466b3abe", 00:12:34.368 "is_configured": true, 00:12:34.368 "data_offset": 0, 00:12:34.368 "data_size": 65536 00:12:34.368 } 00:12:34.368 ] 00:12:34.368 } 00:12:34.368 } 00:12:34.368 }' 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:34.627 BaseBdev2 
00:12:34.627 BaseBdev3 00:12:34.627 BaseBdev4' 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.627 14:23:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.627 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.886 14:23:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.886 [2024-11-20 14:23:13.664581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:34.886 [2024-11-20 14:23:13.664618] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.886 [2024-11-20 14:23:13.664681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.886 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.886 "name": "Existed_Raid", 00:12:34.886 "uuid": "e16789f8-0d02-46e1-aa9f-bb2c18f7e7ee", 00:12:34.886 "strip_size_kb": 64, 00:12:34.886 "state": "offline", 00:12:34.887 "raid_level": "concat", 00:12:34.887 "superblock": false, 00:12:34.887 "num_base_bdevs": 4, 00:12:34.887 "num_base_bdevs_discovered": 3, 00:12:34.887 "num_base_bdevs_operational": 3, 00:12:34.887 "base_bdevs_list": [ 00:12:34.887 { 00:12:34.887 "name": null, 00:12:34.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.887 "is_configured": false, 00:12:34.887 "data_offset": 0, 00:12:34.887 "data_size": 65536 00:12:34.887 }, 00:12:34.887 { 00:12:34.887 "name": "BaseBdev2", 00:12:34.887 "uuid": "0313b22c-0d3a-4a07-b46a-f9956cc78fd2", 00:12:34.887 "is_configured": 
true, 00:12:34.887 "data_offset": 0, 00:12:34.887 "data_size": 65536 00:12:34.887 }, 00:12:34.887 { 00:12:34.887 "name": "BaseBdev3", 00:12:34.887 "uuid": "a630abd1-705e-4cc0-8dc3-f020f3e49104", 00:12:34.887 "is_configured": true, 00:12:34.887 "data_offset": 0, 00:12:34.887 "data_size": 65536 00:12:34.887 }, 00:12:34.887 { 00:12:34.887 "name": "BaseBdev4", 00:12:34.887 "uuid": "f12ac467-f5fc-4bb7-ae8e-8935466b3abe", 00:12:34.887 "is_configured": true, 00:12:34.887 "data_offset": 0, 00:12:34.887 "data_size": 65536 00:12:34.887 } 00:12:34.887 ] 00:12:34.887 }' 00:12:34.887 14:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.887 14:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
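The `verify_raid_bdev_state` calls traced above pull the raid bdev's entry out of `rpc_cmd bdev_raid_get_bdevs all` with the jq filter `.[] | select(.name == "Existed_Raid")` and then compare individual fields (state, raid level, strip size, operational base bdev count) against the expected values. As a rough illustration only — this is not SPDK code; the helper name and the field subset are taken from the dump above — the same check could be sketched in Python:

```python
import json

# Sample output in the shape of `bdev_raid_get_bdevs all`, reduced to the
# fields the harness inspects (values taken from the offline-state dump above).
raid_bdevs_json = '''
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "offline",
    "raid_level": "concat",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3
  }
]
'''

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, num_operational):
    """Mirror the harness check: select the named raid bdev (the jq
    `select(.name == ...)` step) and compare the fields it asserts on."""
    info = next(b for b in bdevs if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

ok = verify_raid_bdev_state(json.loads(raid_bdevs_json),
                            "Existed_Raid", "offline", "concat", 64, 3)
print(ok)  # True
```

This mirrors why the test expects `offline` here: concat has no redundancy (`has_redundancy` returned 1 in the trace), so removing BaseBdev1 drops the array from `online` to `offline` with three of four base bdevs remaining.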
00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.454 [2024-11-20 14:23:14.335312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.454 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.712 [2024-11-20 14:23:14.482306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:35.712 14:23:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.712 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.712 [2024-11-20 14:23:14.627470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:35.712 [2024-11-20 14:23:14.627648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.971 BaseBdev2 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
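The `waitforbdev` sequence traced above (common/autotest_common.sh@903-911) defaults `bdev_timeout` to 2000 ms and then issues `rpc_cmd bdev_get_bdevs -b <name> -t 2000`, letting the SPDK target itself wait for the bdev to appear. A client-side polling analogue — purely illustrative, with `get_bdevs` standing in for the RPC call rather than using SPDK's actual target-side `-t` timeout — might look like:

```python
import time

def waitforbdev(get_bdevs, bdev_name, bdev_timeout=2.0, poll_interval=0.1):
    """Rough client-side analogue of the waitforbdev helper: poll the bdev
    list until the named bdev shows up or the timeout expires. `get_bdevs`
    must return a list of dicts with a "name" key, as bdev_get_bdevs does."""
    deadline = time.monotonic() + bdev_timeout
    while time.monotonic() < deadline:
        if any(b["name"] == bdev_name for b in get_bdevs()):
            return True
        time.sleep(poll_interval)
    return False

# Fake RPC backend for demonstration: BaseBdev2 "appears" on the third poll,
# as if a bdev_malloc_create had just completed asynchronously.
calls = {"n": 0}
def fake_get_bdevs():
    calls["n"] += 1
    return [{"name": "BaseBdev2"}] if calls["n"] >= 3 else []

print(waitforbdev(fake_get_bdevs, "BaseBdev2", bdev_timeout=2.0))  # True
```

The real helper additionally runs `rpc_cmd bdev_wait_for_examine` first (visible in the trace) so that claimed/examined state has settled before the lookup.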
00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.971 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.971 [ 00:12:35.971 { 00:12:35.971 "name": "BaseBdev2", 00:12:35.971 "aliases": [ 00:12:35.971 "d2852015-2530-4a78-8b17-e92103f8e9af" 00:12:35.971 ], 00:12:35.971 "product_name": "Malloc disk", 00:12:35.971 "block_size": 512, 00:12:35.971 "num_blocks": 65536, 00:12:35.971 "uuid": "d2852015-2530-4a78-8b17-e92103f8e9af", 00:12:35.971 "assigned_rate_limits": { 00:12:35.971 "rw_ios_per_sec": 0, 00:12:35.971 "rw_mbytes_per_sec": 0, 00:12:35.971 "r_mbytes_per_sec": 0, 00:12:35.971 "w_mbytes_per_sec": 0 00:12:35.971 }, 00:12:35.971 "claimed": false, 00:12:35.971 "zoned": false, 00:12:35.971 "supported_io_types": { 00:12:35.971 "read": true, 00:12:35.971 "write": true, 00:12:35.971 "unmap": true, 00:12:35.971 "flush": true, 00:12:35.971 "reset": true, 00:12:35.971 "nvme_admin": false, 00:12:35.971 "nvme_io": false, 00:12:35.971 "nvme_io_md": false, 00:12:35.971 "write_zeroes": true, 00:12:35.971 "zcopy": true, 00:12:35.971 "get_zone_info": false, 00:12:35.971 "zone_management": false, 00:12:35.971 "zone_append": false, 00:12:35.971 "compare": false, 00:12:35.971 "compare_and_write": false, 00:12:35.971 "abort": true, 00:12:35.971 "seek_hole": false, 00:12:35.971 "seek_data": false, 
00:12:35.971 "copy": true, 00:12:35.971 "nvme_iov_md": false 00:12:35.971 }, 00:12:35.971 "memory_domains": [ 00:12:35.971 { 00:12:35.971 "dma_device_id": "system", 00:12:35.971 "dma_device_type": 1 00:12:35.971 }, 00:12:35.971 { 00:12:35.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.971 "dma_device_type": 2 00:12:35.971 } 00:12:35.971 ], 00:12:35.971 "driver_specific": {} 00:12:35.971 } 00:12:35.972 ] 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.972 BaseBdev3 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.972 
14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.972 [ 00:12:35.972 { 00:12:35.972 "name": "BaseBdev3", 00:12:35.972 "aliases": [ 00:12:35.972 "fc9fd5f9-5ba0-47ec-9237-4e445f8f56c9" 00:12:35.972 ], 00:12:35.972 "product_name": "Malloc disk", 00:12:35.972 "block_size": 512, 00:12:35.972 "num_blocks": 65536, 00:12:35.972 "uuid": "fc9fd5f9-5ba0-47ec-9237-4e445f8f56c9", 00:12:35.972 "assigned_rate_limits": { 00:12:35.972 "rw_ios_per_sec": 0, 00:12:35.972 "rw_mbytes_per_sec": 0, 00:12:35.972 "r_mbytes_per_sec": 0, 00:12:35.972 "w_mbytes_per_sec": 0 00:12:35.972 }, 00:12:35.972 "claimed": false, 00:12:35.972 "zoned": false, 00:12:35.972 "supported_io_types": { 00:12:35.972 "read": true, 00:12:35.972 "write": true, 00:12:35.972 "unmap": true, 00:12:35.972 "flush": true, 00:12:35.972 "reset": true, 00:12:35.972 "nvme_admin": false, 00:12:35.972 "nvme_io": false, 00:12:35.972 "nvme_io_md": false, 00:12:35.972 "write_zeroes": true, 00:12:35.972 "zcopy": true, 00:12:35.972 "get_zone_info": false, 00:12:35.972 "zone_management": false, 00:12:35.972 "zone_append": false, 00:12:35.972 "compare": false, 00:12:35.972 "compare_and_write": false, 00:12:35.972 "abort": true, 00:12:35.972 "seek_hole": false, 00:12:35.972 "seek_data": false, 00:12:35.972 
"copy": true, 00:12:35.972 "nvme_iov_md": false 00:12:35.972 }, 00:12:35.972 "memory_domains": [ 00:12:35.972 { 00:12:35.972 "dma_device_id": "system", 00:12:35.972 "dma_device_type": 1 00:12:35.972 }, 00:12:35.972 { 00:12:35.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.972 "dma_device_type": 2 00:12:35.972 } 00:12:35.972 ], 00:12:35.972 "driver_specific": {} 00:12:35.972 } 00:12:35.972 ] 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.972 BaseBdev4 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.972 14:23:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.972 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.231 [ 00:12:36.231 { 00:12:36.231 "name": "BaseBdev4", 00:12:36.231 "aliases": [ 00:12:36.231 "c039a5e5-ba55-454d-819a-228e2d04d36e" 00:12:36.231 ], 00:12:36.231 "product_name": "Malloc disk", 00:12:36.231 "block_size": 512, 00:12:36.231 "num_blocks": 65536, 00:12:36.231 "uuid": "c039a5e5-ba55-454d-819a-228e2d04d36e", 00:12:36.231 "assigned_rate_limits": { 00:12:36.231 "rw_ios_per_sec": 0, 00:12:36.231 "rw_mbytes_per_sec": 0, 00:12:36.231 "r_mbytes_per_sec": 0, 00:12:36.231 "w_mbytes_per_sec": 0 00:12:36.231 }, 00:12:36.231 "claimed": false, 00:12:36.231 "zoned": false, 00:12:36.231 "supported_io_types": { 00:12:36.231 "read": true, 00:12:36.231 "write": true, 00:12:36.231 "unmap": true, 00:12:36.231 "flush": true, 00:12:36.231 "reset": true, 00:12:36.231 "nvme_admin": false, 00:12:36.231 "nvme_io": false, 00:12:36.231 "nvme_io_md": false, 00:12:36.231 "write_zeroes": true, 00:12:36.231 "zcopy": true, 00:12:36.231 "get_zone_info": false, 00:12:36.231 "zone_management": false, 00:12:36.231 "zone_append": false, 00:12:36.231 "compare": false, 00:12:36.231 "compare_and_write": false, 00:12:36.231 "abort": true, 00:12:36.231 "seek_hole": false, 00:12:36.231 "seek_data": false, 00:12:36.231 "copy": true, 
00:12:36.231 "nvme_iov_md": false 00:12:36.231 }, 00:12:36.231 "memory_domains": [ 00:12:36.231 { 00:12:36.231 "dma_device_id": "system", 00:12:36.231 "dma_device_type": 1 00:12:36.231 }, 00:12:36.231 { 00:12:36.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.231 "dma_device_type": 2 00:12:36.231 } 00:12:36.231 ], 00:12:36.231 "driver_specific": {} 00:12:36.231 } 00:12:36.231 ] 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.231 [2024-11-20 14:23:14.969119] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:36.231 [2024-11-20 14:23:14.969174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:36.231 [2024-11-20 14:23:14.969205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.231 [2024-11-20 14:23:14.971617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:36.231 [2024-11-20 14:23:14.971688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.231 14:23:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.231 14:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.231 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.231 "name": "Existed_Raid", 00:12:36.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.231 "strip_size_kb": 64, 00:12:36.231 "state": "configuring", 00:12:36.231 
"raid_level": "concat", 00:12:36.231 "superblock": false, 00:12:36.231 "num_base_bdevs": 4, 00:12:36.231 "num_base_bdevs_discovered": 3, 00:12:36.231 "num_base_bdevs_operational": 4, 00:12:36.231 "base_bdevs_list": [ 00:12:36.231 { 00:12:36.231 "name": "BaseBdev1", 00:12:36.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.231 "is_configured": false, 00:12:36.231 "data_offset": 0, 00:12:36.231 "data_size": 0 00:12:36.231 }, 00:12:36.231 { 00:12:36.231 "name": "BaseBdev2", 00:12:36.231 "uuid": "d2852015-2530-4a78-8b17-e92103f8e9af", 00:12:36.231 "is_configured": true, 00:12:36.231 "data_offset": 0, 00:12:36.231 "data_size": 65536 00:12:36.231 }, 00:12:36.231 { 00:12:36.231 "name": "BaseBdev3", 00:12:36.231 "uuid": "fc9fd5f9-5ba0-47ec-9237-4e445f8f56c9", 00:12:36.231 "is_configured": true, 00:12:36.231 "data_offset": 0, 00:12:36.231 "data_size": 65536 00:12:36.231 }, 00:12:36.231 { 00:12:36.231 "name": "BaseBdev4", 00:12:36.231 "uuid": "c039a5e5-ba55-454d-819a-228e2d04d36e", 00:12:36.231 "is_configured": true, 00:12:36.231 "data_offset": 0, 00:12:36.231 "data_size": 65536 00:12:36.231 } 00:12:36.231 ] 00:12:36.231 }' 00:12:36.231 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.231 14:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.490 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:36.490 14:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.490 14:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.490 [2024-11-20 14:23:15.469271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.749 "name": "Existed_Raid", 00:12:36.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.749 "strip_size_kb": 64, 00:12:36.749 "state": "configuring", 00:12:36.749 "raid_level": "concat", 00:12:36.749 "superblock": false, 
00:12:36.749 "num_base_bdevs": 4, 00:12:36.749 "num_base_bdevs_discovered": 2, 00:12:36.749 "num_base_bdevs_operational": 4, 00:12:36.749 "base_bdevs_list": [ 00:12:36.749 { 00:12:36.749 "name": "BaseBdev1", 00:12:36.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.749 "is_configured": false, 00:12:36.749 "data_offset": 0, 00:12:36.749 "data_size": 0 00:12:36.749 }, 00:12:36.749 { 00:12:36.749 "name": null, 00:12:36.749 "uuid": "d2852015-2530-4a78-8b17-e92103f8e9af", 00:12:36.749 "is_configured": false, 00:12:36.749 "data_offset": 0, 00:12:36.749 "data_size": 65536 00:12:36.749 }, 00:12:36.749 { 00:12:36.749 "name": "BaseBdev3", 00:12:36.749 "uuid": "fc9fd5f9-5ba0-47ec-9237-4e445f8f56c9", 00:12:36.749 "is_configured": true, 00:12:36.749 "data_offset": 0, 00:12:36.749 "data_size": 65536 00:12:36.749 }, 00:12:36.749 { 00:12:36.749 "name": "BaseBdev4", 00:12:36.749 "uuid": "c039a5e5-ba55-454d-819a-228e2d04d36e", 00:12:36.749 "is_configured": true, 00:12:36.749 "data_offset": 0, 00:12:36.749 "data_size": 65536 00:12:36.749 } 00:12:36.749 ] 00:12:36.749 }' 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.749 14:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.008 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.008 14:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:37.008 14:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.008 14:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.008 14:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.267 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:37.267 14:23:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:37.267 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.267 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.267 [2024-11-20 14:23:16.046789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:37.267 BaseBdev1 00:12:37.267 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.267 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:37.267 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:37.267 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:37.267 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:37.267 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:37.267 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:37.267 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:37.267 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.267 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.267 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.267 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:37.267 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.267 14:23:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:37.267 [ 00:12:37.267 { 00:12:37.267 "name": "BaseBdev1", 00:12:37.267 "aliases": [ 00:12:37.267 "ea76128e-4db9-4c54-ae2c-af5922c03abb" 00:12:37.267 ], 00:12:37.267 "product_name": "Malloc disk", 00:12:37.267 "block_size": 512, 00:12:37.267 "num_blocks": 65536, 00:12:37.267 "uuid": "ea76128e-4db9-4c54-ae2c-af5922c03abb", 00:12:37.267 "assigned_rate_limits": { 00:12:37.267 "rw_ios_per_sec": 0, 00:12:37.267 "rw_mbytes_per_sec": 0, 00:12:37.267 "r_mbytes_per_sec": 0, 00:12:37.267 "w_mbytes_per_sec": 0 00:12:37.267 }, 00:12:37.267 "claimed": true, 00:12:37.267 "claim_type": "exclusive_write", 00:12:37.267 "zoned": false, 00:12:37.267 "supported_io_types": { 00:12:37.267 "read": true, 00:12:37.267 "write": true, 00:12:37.267 "unmap": true, 00:12:37.267 "flush": true, 00:12:37.267 "reset": true, 00:12:37.267 "nvme_admin": false, 00:12:37.267 "nvme_io": false, 00:12:37.267 "nvme_io_md": false, 00:12:37.267 "write_zeroes": true, 00:12:37.268 "zcopy": true, 00:12:37.268 "get_zone_info": false, 00:12:37.268 "zone_management": false, 00:12:37.268 "zone_append": false, 00:12:37.268 "compare": false, 00:12:37.268 "compare_and_write": false, 00:12:37.268 "abort": true, 00:12:37.268 "seek_hole": false, 00:12:37.268 "seek_data": false, 00:12:37.268 "copy": true, 00:12:37.268 "nvme_iov_md": false 00:12:37.268 }, 00:12:37.268 "memory_domains": [ 00:12:37.268 { 00:12:37.268 "dma_device_id": "system", 00:12:37.268 "dma_device_type": 1 00:12:37.268 }, 00:12:37.268 { 00:12:37.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.268 "dma_device_type": 2 00:12:37.268 } 00:12:37.268 ], 00:12:37.268 "driver_specific": {} 00:12:37.268 } 00:12:37.268 ] 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.268 "name": "Existed_Raid", 00:12:37.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.268 "strip_size_kb": 64, 00:12:37.268 "state": "configuring", 00:12:37.268 "raid_level": "concat", 00:12:37.268 "superblock": false, 
00:12:37.268 "num_base_bdevs": 4, 00:12:37.268 "num_base_bdevs_discovered": 3, 00:12:37.268 "num_base_bdevs_operational": 4, 00:12:37.268 "base_bdevs_list": [ 00:12:37.268 { 00:12:37.268 "name": "BaseBdev1", 00:12:37.268 "uuid": "ea76128e-4db9-4c54-ae2c-af5922c03abb", 00:12:37.268 "is_configured": true, 00:12:37.268 "data_offset": 0, 00:12:37.268 "data_size": 65536 00:12:37.268 }, 00:12:37.268 { 00:12:37.268 "name": null, 00:12:37.268 "uuid": "d2852015-2530-4a78-8b17-e92103f8e9af", 00:12:37.268 "is_configured": false, 00:12:37.268 "data_offset": 0, 00:12:37.268 "data_size": 65536 00:12:37.268 }, 00:12:37.268 { 00:12:37.268 "name": "BaseBdev3", 00:12:37.268 "uuid": "fc9fd5f9-5ba0-47ec-9237-4e445f8f56c9", 00:12:37.268 "is_configured": true, 00:12:37.268 "data_offset": 0, 00:12:37.268 "data_size": 65536 00:12:37.268 }, 00:12:37.268 { 00:12:37.268 "name": "BaseBdev4", 00:12:37.268 "uuid": "c039a5e5-ba55-454d-819a-228e2d04d36e", 00:12:37.268 "is_configured": true, 00:12:37.268 "data_offset": 0, 00:12:37.268 "data_size": 65536 00:12:37.268 } 00:12:37.268 ] 00:12:37.268 }' 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.268 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.838 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.838 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.838 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.838 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:37.838 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.838 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:37.838 14:23:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:37.838 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.838 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.838 [2024-11-20 14:23:16.647040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:37.838 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.838 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:37.838 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.839 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.839 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:37.839 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.839 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.839 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.839 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.839 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.839 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.839 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.839 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.839 14:23:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.839 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.839 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.839 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.839 "name": "Existed_Raid", 00:12:37.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.839 "strip_size_kb": 64, 00:12:37.839 "state": "configuring", 00:12:37.839 "raid_level": "concat", 00:12:37.839 "superblock": false, 00:12:37.839 "num_base_bdevs": 4, 00:12:37.839 "num_base_bdevs_discovered": 2, 00:12:37.839 "num_base_bdevs_operational": 4, 00:12:37.839 "base_bdevs_list": [ 00:12:37.839 { 00:12:37.839 "name": "BaseBdev1", 00:12:37.839 "uuid": "ea76128e-4db9-4c54-ae2c-af5922c03abb", 00:12:37.839 "is_configured": true, 00:12:37.839 "data_offset": 0, 00:12:37.839 "data_size": 65536 00:12:37.839 }, 00:12:37.839 { 00:12:37.839 "name": null, 00:12:37.839 "uuid": "d2852015-2530-4a78-8b17-e92103f8e9af", 00:12:37.839 "is_configured": false, 00:12:37.839 "data_offset": 0, 00:12:37.839 "data_size": 65536 00:12:37.839 }, 00:12:37.839 { 00:12:37.839 "name": null, 00:12:37.839 "uuid": "fc9fd5f9-5ba0-47ec-9237-4e445f8f56c9", 00:12:37.839 "is_configured": false, 00:12:37.839 "data_offset": 0, 00:12:37.839 "data_size": 65536 00:12:37.839 }, 00:12:37.839 { 00:12:37.839 "name": "BaseBdev4", 00:12:37.839 "uuid": "c039a5e5-ba55-454d-819a-228e2d04d36e", 00:12:37.839 "is_configured": true, 00:12:37.839 "data_offset": 0, 00:12:37.839 "data_size": 65536 00:12:37.839 } 00:12:37.839 ] 00:12:37.839 }' 00:12:37.839 14:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.839 14:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.475 [2024-11-20 14:23:17.175154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.475 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.475 "name": "Existed_Raid", 00:12:38.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.475 "strip_size_kb": 64, 00:12:38.475 "state": "configuring", 00:12:38.475 "raid_level": "concat", 00:12:38.475 "superblock": false, 00:12:38.475 "num_base_bdevs": 4, 00:12:38.475 "num_base_bdevs_discovered": 3, 00:12:38.475 "num_base_bdevs_operational": 4, 00:12:38.475 "base_bdevs_list": [ 00:12:38.475 { 00:12:38.475 "name": "BaseBdev1", 00:12:38.475 "uuid": "ea76128e-4db9-4c54-ae2c-af5922c03abb", 00:12:38.475 "is_configured": true, 00:12:38.475 "data_offset": 0, 00:12:38.475 "data_size": 65536 00:12:38.475 }, 00:12:38.475 { 00:12:38.475 "name": null, 00:12:38.475 "uuid": "d2852015-2530-4a78-8b17-e92103f8e9af", 00:12:38.475 "is_configured": false, 00:12:38.475 "data_offset": 0, 00:12:38.475 "data_size": 65536 00:12:38.475 }, 00:12:38.475 { 00:12:38.476 "name": "BaseBdev3", 00:12:38.476 "uuid": 
"fc9fd5f9-5ba0-47ec-9237-4e445f8f56c9", 00:12:38.476 "is_configured": true, 00:12:38.476 "data_offset": 0, 00:12:38.476 "data_size": 65536 00:12:38.476 }, 00:12:38.476 { 00:12:38.476 "name": "BaseBdev4", 00:12:38.476 "uuid": "c039a5e5-ba55-454d-819a-228e2d04d36e", 00:12:38.476 "is_configured": true, 00:12:38.476 "data_offset": 0, 00:12:38.476 "data_size": 65536 00:12:38.476 } 00:12:38.476 ] 00:12:38.476 }' 00:12:38.476 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.476 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.734 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:38.734 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.734 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.734 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.734 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.734 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:38.734 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:38.734 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.734 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.003 [2024-11-20 14:23:17.715366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.003 "name": "Existed_Raid", 00:12:39.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.003 "strip_size_kb": 64, 00:12:39.003 "state": "configuring", 00:12:39.003 "raid_level": "concat", 00:12:39.003 "superblock": false, 00:12:39.003 "num_base_bdevs": 4, 00:12:39.003 
"num_base_bdevs_discovered": 2, 00:12:39.003 "num_base_bdevs_operational": 4, 00:12:39.003 "base_bdevs_list": [ 00:12:39.003 { 00:12:39.003 "name": null, 00:12:39.003 "uuid": "ea76128e-4db9-4c54-ae2c-af5922c03abb", 00:12:39.003 "is_configured": false, 00:12:39.003 "data_offset": 0, 00:12:39.003 "data_size": 65536 00:12:39.003 }, 00:12:39.003 { 00:12:39.003 "name": null, 00:12:39.003 "uuid": "d2852015-2530-4a78-8b17-e92103f8e9af", 00:12:39.003 "is_configured": false, 00:12:39.003 "data_offset": 0, 00:12:39.003 "data_size": 65536 00:12:39.003 }, 00:12:39.003 { 00:12:39.003 "name": "BaseBdev3", 00:12:39.003 "uuid": "fc9fd5f9-5ba0-47ec-9237-4e445f8f56c9", 00:12:39.003 "is_configured": true, 00:12:39.003 "data_offset": 0, 00:12:39.003 "data_size": 65536 00:12:39.003 }, 00:12:39.003 { 00:12:39.003 "name": "BaseBdev4", 00:12:39.003 "uuid": "c039a5e5-ba55-454d-819a-228e2d04d36e", 00:12:39.003 "is_configured": true, 00:12:39.003 "data_offset": 0, 00:12:39.003 "data_size": 65536 00:12:39.003 } 00:12:39.003 ] 00:12:39.003 }' 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.003 14:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.571 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.571 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.571 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:39.571 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.571 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.571 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:39.571 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:39.571 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.571 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.571 [2024-11-20 14:23:18.320960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.571 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.571 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:39.571 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.572 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.572 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:39.572 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.572 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.572 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.572 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.572 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.572 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.572 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.572 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.572 14:23:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.572 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.572 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.572 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.572 "name": "Existed_Raid", 00:12:39.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.572 "strip_size_kb": 64, 00:12:39.572 "state": "configuring", 00:12:39.572 "raid_level": "concat", 00:12:39.572 "superblock": false, 00:12:39.572 "num_base_bdevs": 4, 00:12:39.572 "num_base_bdevs_discovered": 3, 00:12:39.572 "num_base_bdevs_operational": 4, 00:12:39.572 "base_bdevs_list": [ 00:12:39.572 { 00:12:39.572 "name": null, 00:12:39.572 "uuid": "ea76128e-4db9-4c54-ae2c-af5922c03abb", 00:12:39.572 "is_configured": false, 00:12:39.572 "data_offset": 0, 00:12:39.572 "data_size": 65536 00:12:39.572 }, 00:12:39.572 { 00:12:39.572 "name": "BaseBdev2", 00:12:39.572 "uuid": "d2852015-2530-4a78-8b17-e92103f8e9af", 00:12:39.572 "is_configured": true, 00:12:39.572 "data_offset": 0, 00:12:39.572 "data_size": 65536 00:12:39.572 }, 00:12:39.572 { 00:12:39.572 "name": "BaseBdev3", 00:12:39.572 "uuid": "fc9fd5f9-5ba0-47ec-9237-4e445f8f56c9", 00:12:39.572 "is_configured": true, 00:12:39.572 "data_offset": 0, 00:12:39.572 "data_size": 65536 00:12:39.572 }, 00:12:39.572 { 00:12:39.572 "name": "BaseBdev4", 00:12:39.572 "uuid": "c039a5e5-ba55-454d-819a-228e2d04d36e", 00:12:39.572 "is_configured": true, 00:12:39.572 "data_offset": 0, 00:12:39.572 "data_size": 65536 00:12:39.572 } 00:12:39.572 ] 00:12:39.572 }' 00:12:39.572 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.572 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.139 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:40.139 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.139 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.139 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:40.139 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.139 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:40.139 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:40.139 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.139 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.139 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.139 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.139 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ea76128e-4db9-4c54-ae2c-af5922c03abb 00:12:40.139 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.139 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.139 [2024-11-20 14:23:18.982411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:40.139 [2024-11-20 14:23:18.982473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:40.139 [2024-11-20 14:23:18.982486] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:40.139 [2024-11-20 14:23:18.982821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:40.139 [2024-11-20 14:23:18.983023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:40.139 [2024-11-20 14:23:18.983044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:40.139 [2024-11-20 14:23:18.983355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.139 NewBaseBdev 00:12:40.139 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.139 14:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:40.139 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:40.140 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:40.140 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:40.140 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:40.140 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:40.140 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:40.140 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.140 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.140 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.140 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:40.140 14:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.140 14:23:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.140 [ 00:12:40.140 { 00:12:40.140 "name": "NewBaseBdev", 00:12:40.140 "aliases": [ 00:12:40.140 "ea76128e-4db9-4c54-ae2c-af5922c03abb" 00:12:40.140 ], 00:12:40.140 "product_name": "Malloc disk", 00:12:40.140 "block_size": 512, 00:12:40.140 "num_blocks": 65536, 00:12:40.140 "uuid": "ea76128e-4db9-4c54-ae2c-af5922c03abb", 00:12:40.140 "assigned_rate_limits": { 00:12:40.140 "rw_ios_per_sec": 0, 00:12:40.140 "rw_mbytes_per_sec": 0, 00:12:40.140 "r_mbytes_per_sec": 0, 00:12:40.140 "w_mbytes_per_sec": 0 00:12:40.140 }, 00:12:40.140 "claimed": true, 00:12:40.140 "claim_type": "exclusive_write", 00:12:40.140 "zoned": false, 00:12:40.140 "supported_io_types": { 00:12:40.140 "read": true, 00:12:40.140 "write": true, 00:12:40.140 "unmap": true, 00:12:40.140 "flush": true, 00:12:40.140 "reset": true, 00:12:40.140 "nvme_admin": false, 00:12:40.140 "nvme_io": false, 00:12:40.140 "nvme_io_md": false, 00:12:40.140 "write_zeroes": true, 00:12:40.140 "zcopy": true, 00:12:40.140 "get_zone_info": false, 00:12:40.140 "zone_management": false, 00:12:40.140 "zone_append": false, 00:12:40.140 "compare": false, 00:12:40.140 "compare_and_write": false, 00:12:40.140 "abort": true, 00:12:40.140 "seek_hole": false, 00:12:40.140 "seek_data": false, 00:12:40.140 "copy": true, 00:12:40.140 "nvme_iov_md": false 00:12:40.140 }, 00:12:40.140 "memory_domains": [ 00:12:40.140 { 00:12:40.140 "dma_device_id": "system", 00:12:40.140 "dma_device_type": 1 00:12:40.140 }, 00:12:40.140 { 00:12:40.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.140 "dma_device_type": 2 00:12:40.140 } 00:12:40.140 ], 00:12:40.140 "driver_specific": {} 00:12:40.140 } 00:12:40.140 ] 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.140 "name": "Existed_Raid", 00:12:40.140 "uuid": "75ac2f40-6083-4dff-8f79-24edf1eba326", 00:12:40.140 "strip_size_kb": 64, 00:12:40.140 "state": "online", 00:12:40.140 "raid_level": "concat", 00:12:40.140 "superblock": false, 00:12:40.140 
"num_base_bdevs": 4, 00:12:40.140 "num_base_bdevs_discovered": 4, 00:12:40.140 "num_base_bdevs_operational": 4, 00:12:40.140 "base_bdevs_list": [ 00:12:40.140 { 00:12:40.140 "name": "NewBaseBdev", 00:12:40.140 "uuid": "ea76128e-4db9-4c54-ae2c-af5922c03abb", 00:12:40.140 "is_configured": true, 00:12:40.140 "data_offset": 0, 00:12:40.140 "data_size": 65536 00:12:40.140 }, 00:12:40.140 { 00:12:40.140 "name": "BaseBdev2", 00:12:40.140 "uuid": "d2852015-2530-4a78-8b17-e92103f8e9af", 00:12:40.140 "is_configured": true, 00:12:40.140 "data_offset": 0, 00:12:40.140 "data_size": 65536 00:12:40.140 }, 00:12:40.140 { 00:12:40.140 "name": "BaseBdev3", 00:12:40.140 "uuid": "fc9fd5f9-5ba0-47ec-9237-4e445f8f56c9", 00:12:40.140 "is_configured": true, 00:12:40.140 "data_offset": 0, 00:12:40.140 "data_size": 65536 00:12:40.140 }, 00:12:40.140 { 00:12:40.140 "name": "BaseBdev4", 00:12:40.140 "uuid": "c039a5e5-ba55-454d-819a-228e2d04d36e", 00:12:40.140 "is_configured": true, 00:12:40.140 "data_offset": 0, 00:12:40.140 "data_size": 65536 00:12:40.140 } 00:12:40.140 ] 00:12:40.140 }' 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.140 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.706 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:40.706 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:40.706 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:40.706 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:40.706 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:40.706 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:40.706 14:23:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:40.706 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.706 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:40.706 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.706 [2024-11-20 14:23:19.531068] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.706 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.706 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:40.706 "name": "Existed_Raid", 00:12:40.706 "aliases": [ 00:12:40.706 "75ac2f40-6083-4dff-8f79-24edf1eba326" 00:12:40.706 ], 00:12:40.706 "product_name": "Raid Volume", 00:12:40.706 "block_size": 512, 00:12:40.706 "num_blocks": 262144, 00:12:40.706 "uuid": "75ac2f40-6083-4dff-8f79-24edf1eba326", 00:12:40.706 "assigned_rate_limits": { 00:12:40.706 "rw_ios_per_sec": 0, 00:12:40.706 "rw_mbytes_per_sec": 0, 00:12:40.706 "r_mbytes_per_sec": 0, 00:12:40.706 "w_mbytes_per_sec": 0 00:12:40.706 }, 00:12:40.706 "claimed": false, 00:12:40.706 "zoned": false, 00:12:40.706 "supported_io_types": { 00:12:40.706 "read": true, 00:12:40.706 "write": true, 00:12:40.706 "unmap": true, 00:12:40.706 "flush": true, 00:12:40.706 "reset": true, 00:12:40.706 "nvme_admin": false, 00:12:40.706 "nvme_io": false, 00:12:40.706 "nvme_io_md": false, 00:12:40.706 "write_zeroes": true, 00:12:40.706 "zcopy": false, 00:12:40.706 "get_zone_info": false, 00:12:40.706 "zone_management": false, 00:12:40.706 "zone_append": false, 00:12:40.706 "compare": false, 00:12:40.706 "compare_and_write": false, 00:12:40.706 "abort": false, 00:12:40.706 "seek_hole": false, 00:12:40.706 "seek_data": false, 00:12:40.706 "copy": false, 00:12:40.706 "nvme_iov_md": false 00:12:40.706 }, 
00:12:40.706 "memory_domains": [ 00:12:40.706 { 00:12:40.706 "dma_device_id": "system", 00:12:40.706 "dma_device_type": 1 00:12:40.706 }, 00:12:40.706 { 00:12:40.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.706 "dma_device_type": 2 00:12:40.706 }, 00:12:40.706 { 00:12:40.707 "dma_device_id": "system", 00:12:40.707 "dma_device_type": 1 00:12:40.707 }, 00:12:40.707 { 00:12:40.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.707 "dma_device_type": 2 00:12:40.707 }, 00:12:40.707 { 00:12:40.707 "dma_device_id": "system", 00:12:40.707 "dma_device_type": 1 00:12:40.707 }, 00:12:40.707 { 00:12:40.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.707 "dma_device_type": 2 00:12:40.707 }, 00:12:40.707 { 00:12:40.707 "dma_device_id": "system", 00:12:40.707 "dma_device_type": 1 00:12:40.707 }, 00:12:40.707 { 00:12:40.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.707 "dma_device_type": 2 00:12:40.707 } 00:12:40.707 ], 00:12:40.707 "driver_specific": { 00:12:40.707 "raid": { 00:12:40.707 "uuid": "75ac2f40-6083-4dff-8f79-24edf1eba326", 00:12:40.707 "strip_size_kb": 64, 00:12:40.707 "state": "online", 00:12:40.707 "raid_level": "concat", 00:12:40.707 "superblock": false, 00:12:40.707 "num_base_bdevs": 4, 00:12:40.707 "num_base_bdevs_discovered": 4, 00:12:40.707 "num_base_bdevs_operational": 4, 00:12:40.707 "base_bdevs_list": [ 00:12:40.707 { 00:12:40.707 "name": "NewBaseBdev", 00:12:40.707 "uuid": "ea76128e-4db9-4c54-ae2c-af5922c03abb", 00:12:40.707 "is_configured": true, 00:12:40.707 "data_offset": 0, 00:12:40.707 "data_size": 65536 00:12:40.707 }, 00:12:40.707 { 00:12:40.707 "name": "BaseBdev2", 00:12:40.707 "uuid": "d2852015-2530-4a78-8b17-e92103f8e9af", 00:12:40.707 "is_configured": true, 00:12:40.707 "data_offset": 0, 00:12:40.707 "data_size": 65536 00:12:40.707 }, 00:12:40.707 { 00:12:40.707 "name": "BaseBdev3", 00:12:40.707 "uuid": "fc9fd5f9-5ba0-47ec-9237-4e445f8f56c9", 00:12:40.707 "is_configured": true, 00:12:40.707 "data_offset": 0, 
00:12:40.707 "data_size": 65536 00:12:40.707 }, 00:12:40.707 { 00:12:40.707 "name": "BaseBdev4", 00:12:40.707 "uuid": "c039a5e5-ba55-454d-819a-228e2d04d36e", 00:12:40.707 "is_configured": true, 00:12:40.707 "data_offset": 0, 00:12:40.707 "data_size": 65536 00:12:40.707 } 00:12:40.707 ] 00:12:40.707 } 00:12:40.707 } 00:12:40.707 }' 00:12:40.707 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:40.707 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:40.707 BaseBdev2 00:12:40.707 BaseBdev3 00:12:40.707 BaseBdev4' 00:12:40.707 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.707 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:40.707 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.707 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:40.707 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.707 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.707 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.967 [2024-11-20 14:23:19.894703] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:40.967 [2024-11-20 14:23:19.894740] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:40.967 [2024-11-20 14:23:19.894828] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.967 [2024-11-20 14:23:19.894931] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.967 [2024-11-20 14:23:19.894949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71400 00:12:40.967 14:23:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71400 ']' 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71400 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71400 00:12:40.967 killing process with pid 71400 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71400' 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71400 00:12:40.967 [2024-11-20 14:23:19.933059] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.967 14:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71400 00:12:41.536 [2024-11-20 14:23:20.279651] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.471 14:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:42.471 00:12:42.471 real 0m12.282s 00:12:42.471 user 0m20.342s 00:12:42.471 sys 0m1.602s 00:12:42.471 ************************************ 00:12:42.471 END TEST raid_state_function_test 00:12:42.471 ************************************ 00:12:42.471 14:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.471 14:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.471 14:23:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:12:42.471 14:23:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:42.471 14:23:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.471 14:23:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.471 ************************************ 00:12:42.471 START TEST raid_state_function_test_sb 00:12:42.471 ************************************ 00:12:42.471 14:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:12:42.471 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:42.471 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:42.471 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:42.471 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:42.471 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:42.471 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.471 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72078 00:12:42.472 Process raid 
pid: 72078 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72078' 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72078 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72078 ']' 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.472 14:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.763 [2024-11-20 14:23:21.487445] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:12:42.763 [2024-11-20 14:23:21.487625] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.763 [2024-11-20 14:23:21.663612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.054 [2024-11-20 14:23:21.795361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.054 [2024-11-20 14:23:22.003497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.054 [2024-11-20 14:23:22.003555] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.622 [2024-11-20 14:23:22.417924] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:43.622 [2024-11-20 14:23:22.418018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:43.622 [2024-11-20 14:23:22.418038] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:43.622 [2024-11-20 14:23:22.418057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:43.622 [2024-11-20 14:23:22.418068] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:43.622 [2024-11-20 14:23:22.418090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:43.622 [2024-11-20 14:23:22.418110] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:43.622 [2024-11-20 14:23:22.418130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.622 
14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.622 "name": "Existed_Raid", 00:12:43.622 "uuid": "1f7f557c-ecf9-4d9d-8393-0f7fabccd542", 00:12:43.622 "strip_size_kb": 64, 00:12:43.622 "state": "configuring", 00:12:43.622 "raid_level": "concat", 00:12:43.622 "superblock": true, 00:12:43.622 "num_base_bdevs": 4, 00:12:43.622 "num_base_bdevs_discovered": 0, 00:12:43.622 "num_base_bdevs_operational": 4, 00:12:43.622 "base_bdevs_list": [ 00:12:43.622 { 00:12:43.622 "name": "BaseBdev1", 00:12:43.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.622 "is_configured": false, 00:12:43.622 "data_offset": 0, 00:12:43.622 "data_size": 0 00:12:43.622 }, 00:12:43.622 { 00:12:43.622 "name": "BaseBdev2", 00:12:43.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.622 "is_configured": false, 00:12:43.622 "data_offset": 0, 00:12:43.622 "data_size": 0 00:12:43.622 }, 00:12:43.622 { 00:12:43.622 "name": "BaseBdev3", 00:12:43.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.622 "is_configured": false, 00:12:43.622 "data_offset": 0, 00:12:43.622 "data_size": 0 00:12:43.622 }, 00:12:43.622 { 00:12:43.622 "name": "BaseBdev4", 00:12:43.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.622 "is_configured": false, 00:12:43.622 "data_offset": 0, 00:12:43.622 "data_size": 0 00:12:43.622 } 00:12:43.622 ] 00:12:43.622 }' 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.622 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.190 14:23:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.190 [2024-11-20 14:23:22.926000] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.190 [2024-11-20 14:23:22.926053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.190 [2024-11-20 14:23:22.934009] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:44.190 [2024-11-20 14:23:22.934060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:44.190 [2024-11-20 14:23:22.934076] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:44.190 [2024-11-20 14:23:22.934093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:44.190 [2024-11-20 14:23:22.934103] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:44.190 [2024-11-20 14:23:22.934118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:44.190 [2024-11-20 14:23:22.934135] bdev.c:8485:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:12:44.190 [2024-11-20 14:23:22.934162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.190 [2024-11-20 14:23:22.983980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.190 BaseBdev1 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.190 14:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.190 [ 00:12:44.190 { 00:12:44.190 "name": "BaseBdev1", 00:12:44.190 "aliases": [ 00:12:44.190 "a1cb4d5b-172f-4bc3-ae50-48a47f6cccf2" 00:12:44.190 ], 00:12:44.190 "product_name": "Malloc disk", 00:12:44.190 "block_size": 512, 00:12:44.190 "num_blocks": 65536, 00:12:44.190 "uuid": "a1cb4d5b-172f-4bc3-ae50-48a47f6cccf2", 00:12:44.190 "assigned_rate_limits": { 00:12:44.190 "rw_ios_per_sec": 0, 00:12:44.190 "rw_mbytes_per_sec": 0, 00:12:44.190 "r_mbytes_per_sec": 0, 00:12:44.190 "w_mbytes_per_sec": 0 00:12:44.190 }, 00:12:44.190 "claimed": true, 00:12:44.190 "claim_type": "exclusive_write", 00:12:44.190 "zoned": false, 00:12:44.190 "supported_io_types": { 00:12:44.190 "read": true, 00:12:44.190 "write": true, 00:12:44.190 "unmap": true, 00:12:44.190 "flush": true, 00:12:44.190 "reset": true, 00:12:44.190 "nvme_admin": false, 00:12:44.190 "nvme_io": false, 00:12:44.190 "nvme_io_md": false, 00:12:44.190 "write_zeroes": true, 00:12:44.190 "zcopy": true, 00:12:44.190 "get_zone_info": false, 00:12:44.190 "zone_management": false, 00:12:44.190 "zone_append": false, 00:12:44.191 "compare": false, 00:12:44.191 "compare_and_write": false, 00:12:44.191 "abort": true, 00:12:44.191 "seek_hole": false, 00:12:44.191 "seek_data": false, 00:12:44.191 "copy": true, 00:12:44.191 "nvme_iov_md": false 00:12:44.191 }, 00:12:44.191 "memory_domains": [ 00:12:44.191 { 00:12:44.191 "dma_device_id": "system", 00:12:44.191 "dma_device_type": 1 00:12:44.191 }, 00:12:44.191 { 00:12:44.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.191 "dma_device_type": 2 00:12:44.191 } 
00:12:44.191 ], 00:12:44.191 "driver_specific": {} 00:12:44.191 } 00:12:44.191 ] 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.191 14:23:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.191 "name": "Existed_Raid", 00:12:44.191 "uuid": "c6de6637-82c4-436e-ab31-56b80f1e8958", 00:12:44.191 "strip_size_kb": 64, 00:12:44.191 "state": "configuring", 00:12:44.191 "raid_level": "concat", 00:12:44.191 "superblock": true, 00:12:44.191 "num_base_bdevs": 4, 00:12:44.191 "num_base_bdevs_discovered": 1, 00:12:44.191 "num_base_bdevs_operational": 4, 00:12:44.191 "base_bdevs_list": [ 00:12:44.191 { 00:12:44.191 "name": "BaseBdev1", 00:12:44.191 "uuid": "a1cb4d5b-172f-4bc3-ae50-48a47f6cccf2", 00:12:44.191 "is_configured": true, 00:12:44.191 "data_offset": 2048, 00:12:44.191 "data_size": 63488 00:12:44.191 }, 00:12:44.191 { 00:12:44.191 "name": "BaseBdev2", 00:12:44.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.191 "is_configured": false, 00:12:44.191 "data_offset": 0, 00:12:44.191 "data_size": 0 00:12:44.191 }, 00:12:44.191 { 00:12:44.191 "name": "BaseBdev3", 00:12:44.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.191 "is_configured": false, 00:12:44.191 "data_offset": 0, 00:12:44.191 "data_size": 0 00:12:44.191 }, 00:12:44.191 { 00:12:44.191 "name": "BaseBdev4", 00:12:44.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.191 "is_configured": false, 00:12:44.191 "data_offset": 0, 00:12:44.191 "data_size": 0 00:12:44.191 } 00:12:44.191 ] 00:12:44.191 }' 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.191 14:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.758 14:23:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.758 [2024-11-20 14:23:23.528201] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.758 [2024-11-20 14:23:23.528276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.758 [2024-11-20 14:23:23.536269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.758 [2024-11-20 14:23:23.538702] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:44.758 [2024-11-20 14:23:23.538768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:44.758 [2024-11-20 14:23:23.538788] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:44.758 [2024-11-20 14:23:23.538810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:44.758 [2024-11-20 14:23:23.538823] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:44.758 [2024-11-20 14:23:23.538841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:44.758 "name": "Existed_Raid", 00:12:44.758 "uuid": "39a07a77-a7af-4e84-b90c-cbdfae540f01", 00:12:44.758 "strip_size_kb": 64, 00:12:44.758 "state": "configuring", 00:12:44.758 "raid_level": "concat", 00:12:44.758 "superblock": true, 00:12:44.758 "num_base_bdevs": 4, 00:12:44.758 "num_base_bdevs_discovered": 1, 00:12:44.758 "num_base_bdevs_operational": 4, 00:12:44.758 "base_bdevs_list": [ 00:12:44.758 { 00:12:44.758 "name": "BaseBdev1", 00:12:44.758 "uuid": "a1cb4d5b-172f-4bc3-ae50-48a47f6cccf2", 00:12:44.758 "is_configured": true, 00:12:44.758 "data_offset": 2048, 00:12:44.758 "data_size": 63488 00:12:44.758 }, 00:12:44.758 { 00:12:44.758 "name": "BaseBdev2", 00:12:44.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.758 "is_configured": false, 00:12:44.758 "data_offset": 0, 00:12:44.758 "data_size": 0 00:12:44.758 }, 00:12:44.758 { 00:12:44.758 "name": "BaseBdev3", 00:12:44.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.758 "is_configured": false, 00:12:44.758 "data_offset": 0, 00:12:44.758 "data_size": 0 00:12:44.758 }, 00:12:44.758 { 00:12:44.758 "name": "BaseBdev4", 00:12:44.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.758 "is_configured": false, 00:12:44.758 "data_offset": 0, 00:12:44.758 "data_size": 0 00:12:44.758 } 00:12:44.758 ] 00:12:44.758 }' 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.758 14:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.326 [2024-11-20 14:23:24.082590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:45.326 BaseBdev2 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.326 [ 00:12:45.326 { 00:12:45.326 "name": "BaseBdev2", 00:12:45.326 "aliases": [ 00:12:45.326 "5f5981c8-b151-400b-98a4-2aa94ab5b67b" 00:12:45.326 ], 00:12:45.326 "product_name": "Malloc disk", 00:12:45.326 "block_size": 512, 00:12:45.326 "num_blocks": 65536, 00:12:45.326 "uuid": "5f5981c8-b151-400b-98a4-2aa94ab5b67b", 
00:12:45.326 "assigned_rate_limits": { 00:12:45.326 "rw_ios_per_sec": 0, 00:12:45.326 "rw_mbytes_per_sec": 0, 00:12:45.326 "r_mbytes_per_sec": 0, 00:12:45.326 "w_mbytes_per_sec": 0 00:12:45.326 }, 00:12:45.326 "claimed": true, 00:12:45.326 "claim_type": "exclusive_write", 00:12:45.326 "zoned": false, 00:12:45.326 "supported_io_types": { 00:12:45.326 "read": true, 00:12:45.326 "write": true, 00:12:45.326 "unmap": true, 00:12:45.326 "flush": true, 00:12:45.326 "reset": true, 00:12:45.326 "nvme_admin": false, 00:12:45.326 "nvme_io": false, 00:12:45.326 "nvme_io_md": false, 00:12:45.326 "write_zeroes": true, 00:12:45.326 "zcopy": true, 00:12:45.326 "get_zone_info": false, 00:12:45.326 "zone_management": false, 00:12:45.326 "zone_append": false, 00:12:45.326 "compare": false, 00:12:45.326 "compare_and_write": false, 00:12:45.326 "abort": true, 00:12:45.326 "seek_hole": false, 00:12:45.326 "seek_data": false, 00:12:45.326 "copy": true, 00:12:45.326 "nvme_iov_md": false 00:12:45.326 }, 00:12:45.326 "memory_domains": [ 00:12:45.326 { 00:12:45.326 "dma_device_id": "system", 00:12:45.326 "dma_device_type": 1 00:12:45.326 }, 00:12:45.326 { 00:12:45.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.326 "dma_device_type": 2 00:12:45.326 } 00:12:45.326 ], 00:12:45.326 "driver_specific": {} 00:12:45.326 } 00:12:45.326 ] 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.326 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.327 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.327 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.327 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.327 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.327 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.327 "name": "Existed_Raid", 00:12:45.327 "uuid": "39a07a77-a7af-4e84-b90c-cbdfae540f01", 00:12:45.327 "strip_size_kb": 64, 00:12:45.327 "state": "configuring", 00:12:45.327 "raid_level": "concat", 00:12:45.327 "superblock": true, 00:12:45.327 "num_base_bdevs": 4, 00:12:45.327 "num_base_bdevs_discovered": 2, 00:12:45.327 
"num_base_bdevs_operational": 4, 00:12:45.327 "base_bdevs_list": [ 00:12:45.327 { 00:12:45.327 "name": "BaseBdev1", 00:12:45.327 "uuid": "a1cb4d5b-172f-4bc3-ae50-48a47f6cccf2", 00:12:45.327 "is_configured": true, 00:12:45.327 "data_offset": 2048, 00:12:45.327 "data_size": 63488 00:12:45.327 }, 00:12:45.327 { 00:12:45.327 "name": "BaseBdev2", 00:12:45.327 "uuid": "5f5981c8-b151-400b-98a4-2aa94ab5b67b", 00:12:45.327 "is_configured": true, 00:12:45.327 "data_offset": 2048, 00:12:45.327 "data_size": 63488 00:12:45.327 }, 00:12:45.327 { 00:12:45.327 "name": "BaseBdev3", 00:12:45.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.327 "is_configured": false, 00:12:45.327 "data_offset": 0, 00:12:45.327 "data_size": 0 00:12:45.327 }, 00:12:45.327 { 00:12:45.327 "name": "BaseBdev4", 00:12:45.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.327 "is_configured": false, 00:12:45.327 "data_offset": 0, 00:12:45.327 "data_size": 0 00:12:45.327 } 00:12:45.327 ] 00:12:45.327 }' 00:12:45.327 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.327 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.894 [2024-11-20 14:23:24.650535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:45.894 BaseBdev3 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.894 [ 00:12:45.894 { 00:12:45.894 "name": "BaseBdev3", 00:12:45.894 "aliases": [ 00:12:45.894 "8c528c73-9543-4a01-8f6b-47219a9a4176" 00:12:45.894 ], 00:12:45.894 "product_name": "Malloc disk", 00:12:45.894 "block_size": 512, 00:12:45.894 "num_blocks": 65536, 00:12:45.894 "uuid": "8c528c73-9543-4a01-8f6b-47219a9a4176", 00:12:45.894 "assigned_rate_limits": { 00:12:45.894 "rw_ios_per_sec": 0, 00:12:45.894 "rw_mbytes_per_sec": 0, 00:12:45.894 "r_mbytes_per_sec": 0, 00:12:45.894 "w_mbytes_per_sec": 0 00:12:45.894 }, 00:12:45.894 "claimed": true, 00:12:45.894 "claim_type": "exclusive_write", 00:12:45.894 "zoned": false, 00:12:45.894 "supported_io_types": { 
00:12:45.894 "read": true, 00:12:45.894 "write": true, 00:12:45.894 "unmap": true, 00:12:45.894 "flush": true, 00:12:45.894 "reset": true, 00:12:45.894 "nvme_admin": false, 00:12:45.894 "nvme_io": false, 00:12:45.894 "nvme_io_md": false, 00:12:45.894 "write_zeroes": true, 00:12:45.894 "zcopy": true, 00:12:45.894 "get_zone_info": false, 00:12:45.894 "zone_management": false, 00:12:45.894 "zone_append": false, 00:12:45.894 "compare": false, 00:12:45.894 "compare_and_write": false, 00:12:45.894 "abort": true, 00:12:45.894 "seek_hole": false, 00:12:45.894 "seek_data": false, 00:12:45.894 "copy": true, 00:12:45.894 "nvme_iov_md": false 00:12:45.894 }, 00:12:45.894 "memory_domains": [ 00:12:45.894 { 00:12:45.894 "dma_device_id": "system", 00:12:45.894 "dma_device_type": 1 00:12:45.894 }, 00:12:45.894 { 00:12:45.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.894 "dma_device_type": 2 00:12:45.894 } 00:12:45.894 ], 00:12:45.894 "driver_specific": {} 00:12:45.894 } 00:12:45.894 ] 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.894 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.894 "name": "Existed_Raid", 00:12:45.894 "uuid": "39a07a77-a7af-4e84-b90c-cbdfae540f01", 00:12:45.894 "strip_size_kb": 64, 00:12:45.894 "state": "configuring", 00:12:45.894 "raid_level": "concat", 00:12:45.894 "superblock": true, 00:12:45.894 "num_base_bdevs": 4, 00:12:45.894 "num_base_bdevs_discovered": 3, 00:12:45.894 "num_base_bdevs_operational": 4, 00:12:45.894 "base_bdevs_list": [ 00:12:45.894 { 00:12:45.894 "name": "BaseBdev1", 00:12:45.894 "uuid": "a1cb4d5b-172f-4bc3-ae50-48a47f6cccf2", 00:12:45.894 "is_configured": true, 00:12:45.894 "data_offset": 2048, 00:12:45.894 "data_size": 63488 00:12:45.894 }, 00:12:45.894 { 00:12:45.894 "name": "BaseBdev2", 00:12:45.894 
"uuid": "5f5981c8-b151-400b-98a4-2aa94ab5b67b", 00:12:45.894 "is_configured": true, 00:12:45.894 "data_offset": 2048, 00:12:45.894 "data_size": 63488 00:12:45.894 }, 00:12:45.894 { 00:12:45.894 "name": "BaseBdev3", 00:12:45.894 "uuid": "8c528c73-9543-4a01-8f6b-47219a9a4176", 00:12:45.894 "is_configured": true, 00:12:45.894 "data_offset": 2048, 00:12:45.894 "data_size": 63488 00:12:45.894 }, 00:12:45.895 { 00:12:45.895 "name": "BaseBdev4", 00:12:45.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.895 "is_configured": false, 00:12:45.895 "data_offset": 0, 00:12:45.895 "data_size": 0 00:12:45.895 } 00:12:45.895 ] 00:12:45.895 }' 00:12:45.895 14:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.895 14:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.461 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:46.461 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.461 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.461 [2024-11-20 14:23:25.233533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:46.461 [2024-11-20 14:23:25.233929] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:46.461 [2024-11-20 14:23:25.233952] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:46.461 BaseBdev4 00:12:46.461 [2024-11-20 14:23:25.234333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:46.461 [2024-11-20 14:23:25.234547] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:46.461 [2024-11-20 14:23:25.234572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:46.461 [2024-11-20 14:23:25.234768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.461 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.461 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:46.461 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:46.461 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.462 [ 00:12:46.462 { 00:12:46.462 "name": "BaseBdev4", 00:12:46.462 "aliases": [ 00:12:46.462 "dd4f7c59-bc2c-4847-80e4-7e7e633f6dc8" 00:12:46.462 ], 00:12:46.462 "product_name": "Malloc disk", 00:12:46.462 "block_size": 512, 00:12:46.462 
"num_blocks": 65536, 00:12:46.462 "uuid": "dd4f7c59-bc2c-4847-80e4-7e7e633f6dc8", 00:12:46.462 "assigned_rate_limits": { 00:12:46.462 "rw_ios_per_sec": 0, 00:12:46.462 "rw_mbytes_per_sec": 0, 00:12:46.462 "r_mbytes_per_sec": 0, 00:12:46.462 "w_mbytes_per_sec": 0 00:12:46.462 }, 00:12:46.462 "claimed": true, 00:12:46.462 "claim_type": "exclusive_write", 00:12:46.462 "zoned": false, 00:12:46.462 "supported_io_types": { 00:12:46.462 "read": true, 00:12:46.462 "write": true, 00:12:46.462 "unmap": true, 00:12:46.462 "flush": true, 00:12:46.462 "reset": true, 00:12:46.462 "nvme_admin": false, 00:12:46.462 "nvme_io": false, 00:12:46.462 "nvme_io_md": false, 00:12:46.462 "write_zeroes": true, 00:12:46.462 "zcopy": true, 00:12:46.462 "get_zone_info": false, 00:12:46.462 "zone_management": false, 00:12:46.462 "zone_append": false, 00:12:46.462 "compare": false, 00:12:46.462 "compare_and_write": false, 00:12:46.462 "abort": true, 00:12:46.462 "seek_hole": false, 00:12:46.462 "seek_data": false, 00:12:46.462 "copy": true, 00:12:46.462 "nvme_iov_md": false 00:12:46.462 }, 00:12:46.462 "memory_domains": [ 00:12:46.462 { 00:12:46.462 "dma_device_id": "system", 00:12:46.462 "dma_device_type": 1 00:12:46.462 }, 00:12:46.462 { 00:12:46.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.462 "dma_device_type": 2 00:12:46.462 } 00:12:46.462 ], 00:12:46.462 "driver_specific": {} 00:12:46.462 } 00:12:46.462 ] 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.462 "name": "Existed_Raid", 00:12:46.462 "uuid": "39a07a77-a7af-4e84-b90c-cbdfae540f01", 00:12:46.462 "strip_size_kb": 64, 00:12:46.462 "state": "online", 00:12:46.462 "raid_level": "concat", 00:12:46.462 "superblock": true, 00:12:46.462 "num_base_bdevs": 4, 
00:12:46.462 "num_base_bdevs_discovered": 4, 00:12:46.462 "num_base_bdevs_operational": 4, 00:12:46.462 "base_bdevs_list": [ 00:12:46.462 { 00:12:46.462 "name": "BaseBdev1", 00:12:46.462 "uuid": "a1cb4d5b-172f-4bc3-ae50-48a47f6cccf2", 00:12:46.462 "is_configured": true, 00:12:46.462 "data_offset": 2048, 00:12:46.462 "data_size": 63488 00:12:46.462 }, 00:12:46.462 { 00:12:46.462 "name": "BaseBdev2", 00:12:46.462 "uuid": "5f5981c8-b151-400b-98a4-2aa94ab5b67b", 00:12:46.462 "is_configured": true, 00:12:46.462 "data_offset": 2048, 00:12:46.462 "data_size": 63488 00:12:46.462 }, 00:12:46.462 { 00:12:46.462 "name": "BaseBdev3", 00:12:46.462 "uuid": "8c528c73-9543-4a01-8f6b-47219a9a4176", 00:12:46.462 "is_configured": true, 00:12:46.462 "data_offset": 2048, 00:12:46.462 "data_size": 63488 00:12:46.462 }, 00:12:46.462 { 00:12:46.462 "name": "BaseBdev4", 00:12:46.462 "uuid": "dd4f7c59-bc2c-4847-80e4-7e7e633f6dc8", 00:12:46.462 "is_configured": true, 00:12:46.462 "data_offset": 2048, 00:12:46.462 "data_size": 63488 00:12:46.462 } 00:12:46.462 ] 00:12:46.462 }' 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.462 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.037 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:47.037 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:47.037 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:47.037 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:47.037 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:47.037 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:47.037 
14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:47.037 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:47.037 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.037 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.037 [2024-11-20 14:23:25.826413] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:47.037 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.037 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:47.037 "name": "Existed_Raid", 00:12:47.037 "aliases": [ 00:12:47.037 "39a07a77-a7af-4e84-b90c-cbdfae540f01" 00:12:47.037 ], 00:12:47.037 "product_name": "Raid Volume", 00:12:47.037 "block_size": 512, 00:12:47.037 "num_blocks": 253952, 00:12:47.037 "uuid": "39a07a77-a7af-4e84-b90c-cbdfae540f01", 00:12:47.037 "assigned_rate_limits": { 00:12:47.037 "rw_ios_per_sec": 0, 00:12:47.037 "rw_mbytes_per_sec": 0, 00:12:47.037 "r_mbytes_per_sec": 0, 00:12:47.037 "w_mbytes_per_sec": 0 00:12:47.037 }, 00:12:47.037 "claimed": false, 00:12:47.037 "zoned": false, 00:12:47.037 "supported_io_types": { 00:12:47.037 "read": true, 00:12:47.037 "write": true, 00:12:47.037 "unmap": true, 00:12:47.037 "flush": true, 00:12:47.037 "reset": true, 00:12:47.037 "nvme_admin": false, 00:12:47.037 "nvme_io": false, 00:12:47.037 "nvme_io_md": false, 00:12:47.037 "write_zeroes": true, 00:12:47.037 "zcopy": false, 00:12:47.037 "get_zone_info": false, 00:12:47.037 "zone_management": false, 00:12:47.037 "zone_append": false, 00:12:47.037 "compare": false, 00:12:47.037 "compare_and_write": false, 00:12:47.037 "abort": false, 00:12:47.037 "seek_hole": false, 00:12:47.037 "seek_data": false, 00:12:47.037 "copy": false, 00:12:47.037 
"nvme_iov_md": false 00:12:47.037 }, 00:12:47.037 "memory_domains": [ 00:12:47.037 { 00:12:47.037 "dma_device_id": "system", 00:12:47.037 "dma_device_type": 1 00:12:47.037 }, 00:12:47.037 { 00:12:47.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.037 "dma_device_type": 2 00:12:47.037 }, 00:12:47.037 { 00:12:47.037 "dma_device_id": "system", 00:12:47.037 "dma_device_type": 1 00:12:47.037 }, 00:12:47.037 { 00:12:47.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.037 "dma_device_type": 2 00:12:47.037 }, 00:12:47.037 { 00:12:47.037 "dma_device_id": "system", 00:12:47.037 "dma_device_type": 1 00:12:47.037 }, 00:12:47.037 { 00:12:47.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.037 "dma_device_type": 2 00:12:47.037 }, 00:12:47.037 { 00:12:47.037 "dma_device_id": "system", 00:12:47.037 "dma_device_type": 1 00:12:47.037 }, 00:12:47.037 { 00:12:47.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.037 "dma_device_type": 2 00:12:47.037 } 00:12:47.037 ], 00:12:47.037 "driver_specific": { 00:12:47.037 "raid": { 00:12:47.037 "uuid": "39a07a77-a7af-4e84-b90c-cbdfae540f01", 00:12:47.037 "strip_size_kb": 64, 00:12:47.037 "state": "online", 00:12:47.037 "raid_level": "concat", 00:12:47.037 "superblock": true, 00:12:47.037 "num_base_bdevs": 4, 00:12:47.037 "num_base_bdevs_discovered": 4, 00:12:47.037 "num_base_bdevs_operational": 4, 00:12:47.037 "base_bdevs_list": [ 00:12:47.037 { 00:12:47.037 "name": "BaseBdev1", 00:12:47.037 "uuid": "a1cb4d5b-172f-4bc3-ae50-48a47f6cccf2", 00:12:47.037 "is_configured": true, 00:12:47.037 "data_offset": 2048, 00:12:47.037 "data_size": 63488 00:12:47.037 }, 00:12:47.037 { 00:12:47.037 "name": "BaseBdev2", 00:12:47.037 "uuid": "5f5981c8-b151-400b-98a4-2aa94ab5b67b", 00:12:47.037 "is_configured": true, 00:12:47.037 "data_offset": 2048, 00:12:47.037 "data_size": 63488 00:12:47.037 }, 00:12:47.037 { 00:12:47.037 "name": "BaseBdev3", 00:12:47.037 "uuid": "8c528c73-9543-4a01-8f6b-47219a9a4176", 00:12:47.037 "is_configured": true, 
00:12:47.037 "data_offset": 2048, 00:12:47.037 "data_size": 63488 00:12:47.037 }, 00:12:47.037 { 00:12:47.037 "name": "BaseBdev4", 00:12:47.037 "uuid": "dd4f7c59-bc2c-4847-80e4-7e7e633f6dc8", 00:12:47.037 "is_configured": true, 00:12:47.038 "data_offset": 2048, 00:12:47.038 "data_size": 63488 00:12:47.038 } 00:12:47.038 ] 00:12:47.038 } 00:12:47.038 } 00:12:47.038 }' 00:12:47.038 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:47.038 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:47.038 BaseBdev2 00:12:47.038 BaseBdev3 00:12:47.038 BaseBdev4' 00:12:47.038 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.038 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:47.038 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:47.038 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.038 14:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:47.038 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.038 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.038 14:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.324 14:23:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.324 [2024-11-20 14:23:26.201957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:47.324 [2024-11-20 14:23:26.202003] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:47.324 [2024-11-20 14:23:26.202096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.324 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.583 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:47.583 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.583 "name": "Existed_Raid", 00:12:47.583 "uuid": "39a07a77-a7af-4e84-b90c-cbdfae540f01", 00:12:47.583 "strip_size_kb": 64, 00:12:47.583 "state": "offline", 00:12:47.583 "raid_level": "concat", 00:12:47.583 "superblock": true, 00:12:47.583 "num_base_bdevs": 4, 00:12:47.583 "num_base_bdevs_discovered": 3, 00:12:47.583 "num_base_bdevs_operational": 3, 00:12:47.583 "base_bdevs_list": [ 00:12:47.583 { 00:12:47.583 "name": null, 00:12:47.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.583 "is_configured": false, 00:12:47.583 "data_offset": 0, 00:12:47.583 "data_size": 63488 00:12:47.583 }, 00:12:47.583 { 00:12:47.583 "name": "BaseBdev2", 00:12:47.583 "uuid": "5f5981c8-b151-400b-98a4-2aa94ab5b67b", 00:12:47.583 "is_configured": true, 00:12:47.583 "data_offset": 2048, 00:12:47.583 "data_size": 63488 00:12:47.583 }, 00:12:47.583 { 00:12:47.583 "name": "BaseBdev3", 00:12:47.583 "uuid": "8c528c73-9543-4a01-8f6b-47219a9a4176", 00:12:47.583 "is_configured": true, 00:12:47.583 "data_offset": 2048, 00:12:47.583 "data_size": 63488 00:12:47.583 }, 00:12:47.583 { 00:12:47.583 "name": "BaseBdev4", 00:12:47.583 "uuid": "dd4f7c59-bc2c-4847-80e4-7e7e633f6dc8", 00:12:47.583 "is_configured": true, 00:12:47.583 "data_offset": 2048, 00:12:47.583 "data_size": 63488 00:12:47.583 } 00:12:47.583 ] 00:12:47.583 }' 00:12:47.583 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.583 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.842 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:47.842 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:47.842 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.842 
14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:47.842 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.842 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.842 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.100 [2024-11-20 14:23:26.852814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.100 14:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.100 [2024-11-20 14:23:27.003255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:48.359 14:23:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.359 [2024-11-20 14:23:27.150466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:48.359 [2024-11-20 14:23:27.150534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.359 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 BaseBdev2 00:12:48.618 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.618 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:48.618 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:48.618 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:48.618 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:48.618 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:48.618 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:48.618 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:48.618 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.618 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.618 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:48.618 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.618 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 [ 00:12:48.618 { 00:12:48.618 "name": "BaseBdev2", 00:12:48.618 "aliases": [ 00:12:48.618 
"6df3ea6c-bcfd-4e04-81ff-2bd738f9cca3" 00:12:48.618 ], 00:12:48.618 "product_name": "Malloc disk", 00:12:48.618 "block_size": 512, 00:12:48.618 "num_blocks": 65536, 00:12:48.618 "uuid": "6df3ea6c-bcfd-4e04-81ff-2bd738f9cca3", 00:12:48.618 "assigned_rate_limits": { 00:12:48.618 "rw_ios_per_sec": 0, 00:12:48.618 "rw_mbytes_per_sec": 0, 00:12:48.618 "r_mbytes_per_sec": 0, 00:12:48.618 "w_mbytes_per_sec": 0 00:12:48.618 }, 00:12:48.618 "claimed": false, 00:12:48.618 "zoned": false, 00:12:48.618 "supported_io_types": { 00:12:48.618 "read": true, 00:12:48.618 "write": true, 00:12:48.618 "unmap": true, 00:12:48.618 "flush": true, 00:12:48.618 "reset": true, 00:12:48.619 "nvme_admin": false, 00:12:48.619 "nvme_io": false, 00:12:48.619 "nvme_io_md": false, 00:12:48.619 "write_zeroes": true, 00:12:48.619 "zcopy": true, 00:12:48.619 "get_zone_info": false, 00:12:48.619 "zone_management": false, 00:12:48.619 "zone_append": false, 00:12:48.619 "compare": false, 00:12:48.619 "compare_and_write": false, 00:12:48.619 "abort": true, 00:12:48.619 "seek_hole": false, 00:12:48.619 "seek_data": false, 00:12:48.619 "copy": true, 00:12:48.619 "nvme_iov_md": false 00:12:48.619 }, 00:12:48.619 "memory_domains": [ 00:12:48.619 { 00:12:48.619 "dma_device_id": "system", 00:12:48.619 "dma_device_type": 1 00:12:48.619 }, 00:12:48.619 { 00:12:48.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.619 "dma_device_type": 2 00:12:48.619 } 00:12:48.619 ], 00:12:48.619 "driver_specific": {} 00:12:48.619 } 00:12:48.619 ] 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:48.619 14:23:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.619 BaseBdev3 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.619 [ 00:12:48.619 { 
00:12:48.619 "name": "BaseBdev3", 00:12:48.619 "aliases": [ 00:12:48.619 "3cdb20e9-519c-40f7-b569-2b5a8b8c3b4f" 00:12:48.619 ], 00:12:48.619 "product_name": "Malloc disk", 00:12:48.619 "block_size": 512, 00:12:48.619 "num_blocks": 65536, 00:12:48.619 "uuid": "3cdb20e9-519c-40f7-b569-2b5a8b8c3b4f", 00:12:48.619 "assigned_rate_limits": { 00:12:48.619 "rw_ios_per_sec": 0, 00:12:48.619 "rw_mbytes_per_sec": 0, 00:12:48.619 "r_mbytes_per_sec": 0, 00:12:48.619 "w_mbytes_per_sec": 0 00:12:48.619 }, 00:12:48.619 "claimed": false, 00:12:48.619 "zoned": false, 00:12:48.619 "supported_io_types": { 00:12:48.619 "read": true, 00:12:48.619 "write": true, 00:12:48.619 "unmap": true, 00:12:48.619 "flush": true, 00:12:48.619 "reset": true, 00:12:48.619 "nvme_admin": false, 00:12:48.619 "nvme_io": false, 00:12:48.619 "nvme_io_md": false, 00:12:48.619 "write_zeroes": true, 00:12:48.619 "zcopy": true, 00:12:48.619 "get_zone_info": false, 00:12:48.619 "zone_management": false, 00:12:48.619 "zone_append": false, 00:12:48.619 "compare": false, 00:12:48.619 "compare_and_write": false, 00:12:48.619 "abort": true, 00:12:48.619 "seek_hole": false, 00:12:48.619 "seek_data": false, 00:12:48.619 "copy": true, 00:12:48.619 "nvme_iov_md": false 00:12:48.619 }, 00:12:48.619 "memory_domains": [ 00:12:48.619 { 00:12:48.619 "dma_device_id": "system", 00:12:48.619 "dma_device_type": 1 00:12:48.619 }, 00:12:48.619 { 00:12:48.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.619 "dma_device_type": 2 00:12:48.619 } 00:12:48.619 ], 00:12:48.619 "driver_specific": {} 00:12:48.619 } 00:12:48.619 ] 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.619 BaseBdev4 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:48.619 [ 00:12:48.619 { 00:12:48.619 "name": "BaseBdev4", 00:12:48.619 "aliases": [ 00:12:48.619 "3ac0d5ae-c6c1-4007-92d2-fc1afd779718" 00:12:48.619 ], 00:12:48.619 "product_name": "Malloc disk", 00:12:48.619 "block_size": 512, 00:12:48.619 "num_blocks": 65536, 00:12:48.619 "uuid": "3ac0d5ae-c6c1-4007-92d2-fc1afd779718", 00:12:48.619 "assigned_rate_limits": { 00:12:48.619 "rw_ios_per_sec": 0, 00:12:48.619 "rw_mbytes_per_sec": 0, 00:12:48.619 "r_mbytes_per_sec": 0, 00:12:48.619 "w_mbytes_per_sec": 0 00:12:48.619 }, 00:12:48.619 "claimed": false, 00:12:48.619 "zoned": false, 00:12:48.619 "supported_io_types": { 00:12:48.619 "read": true, 00:12:48.619 "write": true, 00:12:48.619 "unmap": true, 00:12:48.619 "flush": true, 00:12:48.619 "reset": true, 00:12:48.619 "nvme_admin": false, 00:12:48.619 "nvme_io": false, 00:12:48.619 "nvme_io_md": false, 00:12:48.619 "write_zeroes": true, 00:12:48.619 "zcopy": true, 00:12:48.619 "get_zone_info": false, 00:12:48.619 "zone_management": false, 00:12:48.619 "zone_append": false, 00:12:48.619 "compare": false, 00:12:48.619 "compare_and_write": false, 00:12:48.619 "abort": true, 00:12:48.619 "seek_hole": false, 00:12:48.619 "seek_data": false, 00:12:48.619 "copy": true, 00:12:48.619 "nvme_iov_md": false 00:12:48.619 }, 00:12:48.619 "memory_domains": [ 00:12:48.619 { 00:12:48.619 "dma_device_id": "system", 00:12:48.619 "dma_device_type": 1 00:12:48.619 }, 00:12:48.619 { 00:12:48.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.619 "dma_device_type": 2 00:12:48.619 } 00:12:48.619 ], 00:12:48.619 "driver_specific": {} 00:12:48.619 } 00:12:48.619 ] 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:48.619 14:23:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.619 [2024-11-20 14:23:27.537348] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.619 [2024-11-20 14:23:27.537412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.619 [2024-11-20 14:23:27.537453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:48.619 [2024-11-20 14:23:27.540316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:48.619 [2024-11-20 14:23:27.540417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.619 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.620 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:48.620 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.620 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:48.620 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.620 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.620 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.620 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.620 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.620 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.620 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.620 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.620 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.620 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.620 "name": "Existed_Raid", 00:12:48.620 "uuid": "ac39cab6-92ba-4b9e-811d-cc2b1f93d772", 00:12:48.620 "strip_size_kb": 64, 00:12:48.620 "state": "configuring", 00:12:48.620 "raid_level": "concat", 00:12:48.620 "superblock": true, 00:12:48.620 "num_base_bdevs": 4, 00:12:48.620 "num_base_bdevs_discovered": 3, 00:12:48.620 "num_base_bdevs_operational": 4, 00:12:48.620 "base_bdevs_list": [ 00:12:48.620 { 00:12:48.620 "name": "BaseBdev1", 00:12:48.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.620 "is_configured": false, 00:12:48.620 "data_offset": 0, 00:12:48.620 "data_size": 0 00:12:48.620 }, 00:12:48.620 { 00:12:48.620 "name": "BaseBdev2", 00:12:48.620 "uuid": "6df3ea6c-bcfd-4e04-81ff-2bd738f9cca3", 00:12:48.620 "is_configured": true, 00:12:48.620 "data_offset": 2048, 00:12:48.620 "data_size": 63488 
00:12:48.620 }, 00:12:48.620 { 00:12:48.620 "name": "BaseBdev3", 00:12:48.620 "uuid": "3cdb20e9-519c-40f7-b569-2b5a8b8c3b4f", 00:12:48.620 "is_configured": true, 00:12:48.620 "data_offset": 2048, 00:12:48.620 "data_size": 63488 00:12:48.620 }, 00:12:48.620 { 00:12:48.620 "name": "BaseBdev4", 00:12:48.620 "uuid": "3ac0d5ae-c6c1-4007-92d2-fc1afd779718", 00:12:48.620 "is_configured": true, 00:12:48.620 "data_offset": 2048, 00:12:48.620 "data_size": 63488 00:12:48.620 } 00:12:48.620 ] 00:12:48.620 }' 00:12:48.620 14:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.620 14:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.187 [2024-11-20 14:23:28.053538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.187 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.187 "name": "Existed_Raid", 00:12:49.187 "uuid": "ac39cab6-92ba-4b9e-811d-cc2b1f93d772", 00:12:49.187 "strip_size_kb": 64, 00:12:49.187 "state": "configuring", 00:12:49.187 "raid_level": "concat", 00:12:49.187 "superblock": true, 00:12:49.187 "num_base_bdevs": 4, 00:12:49.187 "num_base_bdevs_discovered": 2, 00:12:49.187 "num_base_bdevs_operational": 4, 00:12:49.187 "base_bdevs_list": [ 00:12:49.187 { 00:12:49.187 "name": "BaseBdev1", 00:12:49.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.187 "is_configured": false, 00:12:49.187 "data_offset": 0, 00:12:49.187 "data_size": 0 00:12:49.187 }, 00:12:49.187 { 00:12:49.187 "name": null, 00:12:49.187 "uuid": "6df3ea6c-bcfd-4e04-81ff-2bd738f9cca3", 00:12:49.187 "is_configured": false, 00:12:49.187 "data_offset": 0, 00:12:49.187 "data_size": 63488 
00:12:49.187 }, 00:12:49.187 { 00:12:49.187 "name": "BaseBdev3", 00:12:49.187 "uuid": "3cdb20e9-519c-40f7-b569-2b5a8b8c3b4f", 00:12:49.187 "is_configured": true, 00:12:49.187 "data_offset": 2048, 00:12:49.187 "data_size": 63488 00:12:49.187 }, 00:12:49.187 { 00:12:49.187 "name": "BaseBdev4", 00:12:49.187 "uuid": "3ac0d5ae-c6c1-4007-92d2-fc1afd779718", 00:12:49.187 "is_configured": true, 00:12:49.187 "data_offset": 2048, 00:12:49.187 "data_size": 63488 00:12:49.188 } 00:12:49.188 ] 00:12:49.188 }' 00:12:49.188 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.188 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.754 [2024-11-20 14:23:28.684507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.754 BaseBdev1 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.754 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.754 [ 00:12:49.754 { 00:12:49.754 "name": "BaseBdev1", 00:12:49.754 "aliases": [ 00:12:49.754 "7f664b70-5e33-4083-aba1-efa960c73d0d" 00:12:49.754 ], 00:12:49.754 "product_name": "Malloc disk", 00:12:49.754 "block_size": 512, 00:12:49.754 "num_blocks": 65536, 00:12:49.754 "uuid": "7f664b70-5e33-4083-aba1-efa960c73d0d", 00:12:49.754 "assigned_rate_limits": { 00:12:49.754 "rw_ios_per_sec": 0, 00:12:49.754 "rw_mbytes_per_sec": 0, 
00:12:49.754 "r_mbytes_per_sec": 0, 00:12:49.754 "w_mbytes_per_sec": 0 00:12:49.754 }, 00:12:49.754 "claimed": true, 00:12:49.754 "claim_type": "exclusive_write", 00:12:49.754 "zoned": false, 00:12:49.755 "supported_io_types": { 00:12:49.755 "read": true, 00:12:49.755 "write": true, 00:12:49.755 "unmap": true, 00:12:49.755 "flush": true, 00:12:49.755 "reset": true, 00:12:49.755 "nvme_admin": false, 00:12:49.755 "nvme_io": false, 00:12:49.755 "nvme_io_md": false, 00:12:49.755 "write_zeroes": true, 00:12:49.755 "zcopy": true, 00:12:49.755 "get_zone_info": false, 00:12:49.755 "zone_management": false, 00:12:49.755 "zone_append": false, 00:12:49.755 "compare": false, 00:12:49.755 "compare_and_write": false, 00:12:49.755 "abort": true, 00:12:49.755 "seek_hole": false, 00:12:49.755 "seek_data": false, 00:12:49.755 "copy": true, 00:12:49.755 "nvme_iov_md": false 00:12:49.755 }, 00:12:49.755 "memory_domains": [ 00:12:49.755 { 00:12:49.755 "dma_device_id": "system", 00:12:49.755 "dma_device_type": 1 00:12:49.755 }, 00:12:49.755 { 00:12:49.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.755 "dma_device_type": 2 00:12:49.755 } 00:12:49.755 ], 00:12:49.755 "driver_specific": {} 00:12:49.755 } 00:12:49.755 ] 00:12:49.755 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.755 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:49.755 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:49.755 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.755 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.755 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.755 14:23:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.755 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.755 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.755 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.755 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.755 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.755 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.755 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.755 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.755 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.015 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.015 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.015 "name": "Existed_Raid", 00:12:50.015 "uuid": "ac39cab6-92ba-4b9e-811d-cc2b1f93d772", 00:12:50.015 "strip_size_kb": 64, 00:12:50.015 "state": "configuring", 00:12:50.015 "raid_level": "concat", 00:12:50.015 "superblock": true, 00:12:50.015 "num_base_bdevs": 4, 00:12:50.015 "num_base_bdevs_discovered": 3, 00:12:50.015 "num_base_bdevs_operational": 4, 00:12:50.015 "base_bdevs_list": [ 00:12:50.015 { 00:12:50.015 "name": "BaseBdev1", 00:12:50.015 "uuid": "7f664b70-5e33-4083-aba1-efa960c73d0d", 00:12:50.015 "is_configured": true, 00:12:50.015 "data_offset": 2048, 00:12:50.015 "data_size": 63488 00:12:50.015 }, 00:12:50.015 { 
00:12:50.015 "name": null, 00:12:50.015 "uuid": "6df3ea6c-bcfd-4e04-81ff-2bd738f9cca3", 00:12:50.015 "is_configured": false, 00:12:50.015 "data_offset": 0, 00:12:50.015 "data_size": 63488 00:12:50.015 }, 00:12:50.015 { 00:12:50.015 "name": "BaseBdev3", 00:12:50.015 "uuid": "3cdb20e9-519c-40f7-b569-2b5a8b8c3b4f", 00:12:50.015 "is_configured": true, 00:12:50.015 "data_offset": 2048, 00:12:50.015 "data_size": 63488 00:12:50.015 }, 00:12:50.015 { 00:12:50.015 "name": "BaseBdev4", 00:12:50.015 "uuid": "3ac0d5ae-c6c1-4007-92d2-fc1afd779718", 00:12:50.015 "is_configured": true, 00:12:50.015 "data_offset": 2048, 00:12:50.015 "data_size": 63488 00:12:50.015 } 00:12:50.015 ] 00:12:50.015 }' 00:12:50.015 14:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.015 14:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.316 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.317 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:50.317 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.317 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.317 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.597 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.598 [2024-11-20 14:23:29.280809] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.598 14:23:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.598 "name": "Existed_Raid", 00:12:50.598 "uuid": "ac39cab6-92ba-4b9e-811d-cc2b1f93d772", 00:12:50.598 "strip_size_kb": 64, 00:12:50.598 "state": "configuring", 00:12:50.598 "raid_level": "concat", 00:12:50.598 "superblock": true, 00:12:50.598 "num_base_bdevs": 4, 00:12:50.598 "num_base_bdevs_discovered": 2, 00:12:50.598 "num_base_bdevs_operational": 4, 00:12:50.598 "base_bdevs_list": [ 00:12:50.598 { 00:12:50.598 "name": "BaseBdev1", 00:12:50.598 "uuid": "7f664b70-5e33-4083-aba1-efa960c73d0d", 00:12:50.598 "is_configured": true, 00:12:50.598 "data_offset": 2048, 00:12:50.598 "data_size": 63488 00:12:50.598 }, 00:12:50.598 { 00:12:50.598 "name": null, 00:12:50.598 "uuid": "6df3ea6c-bcfd-4e04-81ff-2bd738f9cca3", 00:12:50.598 "is_configured": false, 00:12:50.598 "data_offset": 0, 00:12:50.598 "data_size": 63488 00:12:50.598 }, 00:12:50.598 { 00:12:50.598 "name": null, 00:12:50.598 "uuid": "3cdb20e9-519c-40f7-b569-2b5a8b8c3b4f", 00:12:50.598 "is_configured": false, 00:12:50.598 "data_offset": 0, 00:12:50.598 "data_size": 63488 00:12:50.598 }, 00:12:50.598 { 00:12:50.598 "name": "BaseBdev4", 00:12:50.598 "uuid": "3ac0d5ae-c6c1-4007-92d2-fc1afd779718", 00:12:50.598 "is_configured": true, 00:12:50.598 "data_offset": 2048, 00:12:50.598 "data_size": 63488 00:12:50.598 } 00:12:50.598 ] 00:12:50.598 }' 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.598 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.857 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.857 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:50.857 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.857 
14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.857 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.116 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:51.116 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:51.116 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.116 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.116 [2024-11-20 14:23:29.844936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.116 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.116 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:51.116 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.116 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.116 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.117 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.117 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.117 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.117 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.117 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:51.117 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.117 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.117 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.117 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.117 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.117 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.117 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.117 "name": "Existed_Raid", 00:12:51.117 "uuid": "ac39cab6-92ba-4b9e-811d-cc2b1f93d772", 00:12:51.117 "strip_size_kb": 64, 00:12:51.117 "state": "configuring", 00:12:51.117 "raid_level": "concat", 00:12:51.117 "superblock": true, 00:12:51.117 "num_base_bdevs": 4, 00:12:51.117 "num_base_bdevs_discovered": 3, 00:12:51.117 "num_base_bdevs_operational": 4, 00:12:51.117 "base_bdevs_list": [ 00:12:51.117 { 00:12:51.117 "name": "BaseBdev1", 00:12:51.117 "uuid": "7f664b70-5e33-4083-aba1-efa960c73d0d", 00:12:51.117 "is_configured": true, 00:12:51.117 "data_offset": 2048, 00:12:51.117 "data_size": 63488 00:12:51.117 }, 00:12:51.117 { 00:12:51.117 "name": null, 00:12:51.117 "uuid": "6df3ea6c-bcfd-4e04-81ff-2bd738f9cca3", 00:12:51.117 "is_configured": false, 00:12:51.117 "data_offset": 0, 00:12:51.117 "data_size": 63488 00:12:51.117 }, 00:12:51.117 { 00:12:51.117 "name": "BaseBdev3", 00:12:51.117 "uuid": "3cdb20e9-519c-40f7-b569-2b5a8b8c3b4f", 00:12:51.117 "is_configured": true, 00:12:51.117 "data_offset": 2048, 00:12:51.117 "data_size": 63488 00:12:51.117 }, 00:12:51.117 { 00:12:51.117 "name": "BaseBdev4", 00:12:51.117 "uuid": 
"3ac0d5ae-c6c1-4007-92d2-fc1afd779718", 00:12:51.117 "is_configured": true, 00:12:51.117 "data_offset": 2048, 00:12:51.117 "data_size": 63488 00:12:51.117 } 00:12:51.117 ] 00:12:51.117 }' 00:12:51.117 14:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.117 14:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.684 [2024-11-20 14:23:30.441236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.684 "name": "Existed_Raid", 00:12:51.684 "uuid": "ac39cab6-92ba-4b9e-811d-cc2b1f93d772", 00:12:51.684 "strip_size_kb": 64, 00:12:51.684 "state": "configuring", 00:12:51.684 "raid_level": "concat", 00:12:51.684 "superblock": true, 00:12:51.684 "num_base_bdevs": 4, 00:12:51.684 "num_base_bdevs_discovered": 2, 00:12:51.684 "num_base_bdevs_operational": 4, 00:12:51.684 "base_bdevs_list": [ 00:12:51.684 { 00:12:51.684 "name": null, 00:12:51.684 
"uuid": "7f664b70-5e33-4083-aba1-efa960c73d0d", 00:12:51.684 "is_configured": false, 00:12:51.684 "data_offset": 0, 00:12:51.684 "data_size": 63488 00:12:51.684 }, 00:12:51.684 { 00:12:51.684 "name": null, 00:12:51.684 "uuid": "6df3ea6c-bcfd-4e04-81ff-2bd738f9cca3", 00:12:51.684 "is_configured": false, 00:12:51.684 "data_offset": 0, 00:12:51.684 "data_size": 63488 00:12:51.684 }, 00:12:51.684 { 00:12:51.684 "name": "BaseBdev3", 00:12:51.684 "uuid": "3cdb20e9-519c-40f7-b569-2b5a8b8c3b4f", 00:12:51.684 "is_configured": true, 00:12:51.684 "data_offset": 2048, 00:12:51.684 "data_size": 63488 00:12:51.684 }, 00:12:51.684 { 00:12:51.684 "name": "BaseBdev4", 00:12:51.684 "uuid": "3ac0d5ae-c6c1-4007-92d2-fc1afd779718", 00:12:51.684 "is_configured": true, 00:12:51.684 "data_offset": 2048, 00:12:51.684 "data_size": 63488 00:12:51.684 } 00:12:51.684 ] 00:12:51.684 }' 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.684 14:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.258 [2024-11-20 14:23:31.120072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.258 14:23:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.258 "name": "Existed_Raid", 00:12:52.258 "uuid": "ac39cab6-92ba-4b9e-811d-cc2b1f93d772", 00:12:52.258 "strip_size_kb": 64, 00:12:52.258 "state": "configuring", 00:12:52.258 "raid_level": "concat", 00:12:52.258 "superblock": true, 00:12:52.258 "num_base_bdevs": 4, 00:12:52.258 "num_base_bdevs_discovered": 3, 00:12:52.258 "num_base_bdevs_operational": 4, 00:12:52.258 "base_bdevs_list": [ 00:12:52.258 { 00:12:52.258 "name": null, 00:12:52.258 "uuid": "7f664b70-5e33-4083-aba1-efa960c73d0d", 00:12:52.258 "is_configured": false, 00:12:52.258 "data_offset": 0, 00:12:52.258 "data_size": 63488 00:12:52.258 }, 00:12:52.258 { 00:12:52.258 "name": "BaseBdev2", 00:12:52.258 "uuid": "6df3ea6c-bcfd-4e04-81ff-2bd738f9cca3", 00:12:52.258 "is_configured": true, 00:12:52.258 "data_offset": 2048, 00:12:52.258 "data_size": 63488 00:12:52.258 }, 00:12:52.258 { 00:12:52.258 "name": "BaseBdev3", 00:12:52.258 "uuid": "3cdb20e9-519c-40f7-b569-2b5a8b8c3b4f", 00:12:52.258 "is_configured": true, 00:12:52.258 "data_offset": 2048, 00:12:52.258 "data_size": 63488 00:12:52.258 }, 00:12:52.258 { 00:12:52.258 "name": "BaseBdev4", 00:12:52.258 "uuid": "3ac0d5ae-c6c1-4007-92d2-fc1afd779718", 00:12:52.258 "is_configured": true, 00:12:52.258 "data_offset": 2048, 00:12:52.258 "data_size": 63488 00:12:52.258 } 00:12:52.258 ] 00:12:52.258 }' 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.258 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.826 14:23:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7f664b70-5e33-4083-aba1-efa960c73d0d 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.826 [2024-11-20 14:23:31.738401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:52.826 [2024-11-20 14:23:31.738724] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:52.826 [2024-11-20 14:23:31.738779] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:52.826 NewBaseBdev 00:12:52.826 [2024-11-20 14:23:31.739166] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:52.826 [2024-11-20 14:23:31.739382] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:52.826 [2024-11-20 14:23:31.739415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:52.826 [2024-11-20 14:23:31.739584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:52.826 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.826 
14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.826 [ 00:12:52.826 { 00:12:52.826 "name": "NewBaseBdev", 00:12:52.826 "aliases": [ 00:12:52.826 "7f664b70-5e33-4083-aba1-efa960c73d0d" 00:12:52.826 ], 00:12:52.826 "product_name": "Malloc disk", 00:12:52.826 "block_size": 512, 00:12:52.826 "num_blocks": 65536, 00:12:52.826 "uuid": "7f664b70-5e33-4083-aba1-efa960c73d0d", 00:12:52.827 "assigned_rate_limits": { 00:12:52.827 "rw_ios_per_sec": 0, 00:12:52.827 "rw_mbytes_per_sec": 0, 00:12:52.827 "r_mbytes_per_sec": 0, 00:12:52.827 "w_mbytes_per_sec": 0 00:12:52.827 }, 00:12:52.827 "claimed": true, 00:12:52.827 "claim_type": "exclusive_write", 00:12:52.827 "zoned": false, 00:12:52.827 "supported_io_types": { 00:12:52.827 "read": true, 00:12:52.827 "write": true, 00:12:52.827 "unmap": true, 00:12:52.827 "flush": true, 00:12:52.827 "reset": true, 00:12:52.827 "nvme_admin": false, 00:12:52.827 "nvme_io": false, 00:12:52.827 "nvme_io_md": false, 00:12:52.827 "write_zeroes": true, 00:12:52.827 "zcopy": true, 00:12:52.827 "get_zone_info": false, 00:12:52.827 "zone_management": false, 00:12:52.827 "zone_append": false, 00:12:52.827 "compare": false, 00:12:52.827 "compare_and_write": false, 00:12:52.827 "abort": true, 00:12:52.827 "seek_hole": false, 00:12:52.827 "seek_data": false, 00:12:52.827 "copy": true, 00:12:52.827 "nvme_iov_md": false 00:12:52.827 }, 00:12:52.827 "memory_domains": [ 00:12:52.827 { 00:12:52.827 "dma_device_id": "system", 00:12:52.827 "dma_device_type": 1 00:12:52.827 }, 00:12:52.827 { 00:12:52.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.827 "dma_device_type": 2 00:12:52.827 } 00:12:52.827 ], 00:12:52.827 "driver_specific": {} 00:12:52.827 } 00:12:52.827 ] 00:12:52.827 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.827 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:52.827 14:23:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:52.827 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.827 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.827 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.827 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.827 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.827 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.827 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.827 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.827 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.827 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.827 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.827 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.827 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.827 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.086 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.086 "name": "Existed_Raid", 00:12:53.086 "uuid": "ac39cab6-92ba-4b9e-811d-cc2b1f93d772", 00:12:53.086 "strip_size_kb": 64, 00:12:53.086 
"state": "online", 00:12:53.086 "raid_level": "concat", 00:12:53.086 "superblock": true, 00:12:53.086 "num_base_bdevs": 4, 00:12:53.086 "num_base_bdevs_discovered": 4, 00:12:53.086 "num_base_bdevs_operational": 4, 00:12:53.086 "base_bdevs_list": [ 00:12:53.086 { 00:12:53.086 "name": "NewBaseBdev", 00:12:53.086 "uuid": "7f664b70-5e33-4083-aba1-efa960c73d0d", 00:12:53.086 "is_configured": true, 00:12:53.086 "data_offset": 2048, 00:12:53.086 "data_size": 63488 00:12:53.086 }, 00:12:53.086 { 00:12:53.086 "name": "BaseBdev2", 00:12:53.086 "uuid": "6df3ea6c-bcfd-4e04-81ff-2bd738f9cca3", 00:12:53.086 "is_configured": true, 00:12:53.086 "data_offset": 2048, 00:12:53.086 "data_size": 63488 00:12:53.086 }, 00:12:53.086 { 00:12:53.086 "name": "BaseBdev3", 00:12:53.086 "uuid": "3cdb20e9-519c-40f7-b569-2b5a8b8c3b4f", 00:12:53.086 "is_configured": true, 00:12:53.086 "data_offset": 2048, 00:12:53.086 "data_size": 63488 00:12:53.086 }, 00:12:53.086 { 00:12:53.086 "name": "BaseBdev4", 00:12:53.086 "uuid": "3ac0d5ae-c6c1-4007-92d2-fc1afd779718", 00:12:53.086 "is_configured": true, 00:12:53.086 "data_offset": 2048, 00:12:53.086 "data_size": 63488 00:12:53.086 } 00:12:53.086 ] 00:12:53.086 }' 00:12:53.086 14:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.086 14:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.345 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:53.345 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:53.345 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:53.345 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:53.345 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:53.345 
14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:53.345 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:53.345 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.345 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.345 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:53.345 [2024-11-20 14:23:32.207352] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.345 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.345 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:53.345 "name": "Existed_Raid", 00:12:53.345 "aliases": [ 00:12:53.345 "ac39cab6-92ba-4b9e-811d-cc2b1f93d772" 00:12:53.345 ], 00:12:53.345 "product_name": "Raid Volume", 00:12:53.345 "block_size": 512, 00:12:53.345 "num_blocks": 253952, 00:12:53.345 "uuid": "ac39cab6-92ba-4b9e-811d-cc2b1f93d772", 00:12:53.345 "assigned_rate_limits": { 00:12:53.345 "rw_ios_per_sec": 0, 00:12:53.345 "rw_mbytes_per_sec": 0, 00:12:53.345 "r_mbytes_per_sec": 0, 00:12:53.345 "w_mbytes_per_sec": 0 00:12:53.345 }, 00:12:53.345 "claimed": false, 00:12:53.345 "zoned": false, 00:12:53.345 "supported_io_types": { 00:12:53.345 "read": true, 00:12:53.345 "write": true, 00:12:53.345 "unmap": true, 00:12:53.345 "flush": true, 00:12:53.345 "reset": true, 00:12:53.345 "nvme_admin": false, 00:12:53.345 "nvme_io": false, 00:12:53.345 "nvme_io_md": false, 00:12:53.345 "write_zeroes": true, 00:12:53.345 "zcopy": false, 00:12:53.345 "get_zone_info": false, 00:12:53.345 "zone_management": false, 00:12:53.345 "zone_append": false, 00:12:53.345 "compare": false, 00:12:53.345 "compare_and_write": false, 00:12:53.345 "abort": 
false, 00:12:53.345 "seek_hole": false, 00:12:53.345 "seek_data": false, 00:12:53.345 "copy": false, 00:12:53.345 "nvme_iov_md": false 00:12:53.345 }, 00:12:53.345 "memory_domains": [ 00:12:53.345 { 00:12:53.345 "dma_device_id": "system", 00:12:53.345 "dma_device_type": 1 00:12:53.345 }, 00:12:53.345 { 00:12:53.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.345 "dma_device_type": 2 00:12:53.345 }, 00:12:53.345 { 00:12:53.345 "dma_device_id": "system", 00:12:53.345 "dma_device_type": 1 00:12:53.345 }, 00:12:53.345 { 00:12:53.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.345 "dma_device_type": 2 00:12:53.345 }, 00:12:53.345 { 00:12:53.345 "dma_device_id": "system", 00:12:53.345 "dma_device_type": 1 00:12:53.345 }, 00:12:53.345 { 00:12:53.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.345 "dma_device_type": 2 00:12:53.345 }, 00:12:53.345 { 00:12:53.345 "dma_device_id": "system", 00:12:53.345 "dma_device_type": 1 00:12:53.345 }, 00:12:53.345 { 00:12:53.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.345 "dma_device_type": 2 00:12:53.345 } 00:12:53.345 ], 00:12:53.345 "driver_specific": { 00:12:53.345 "raid": { 00:12:53.345 "uuid": "ac39cab6-92ba-4b9e-811d-cc2b1f93d772", 00:12:53.345 "strip_size_kb": 64, 00:12:53.345 "state": "online", 00:12:53.345 "raid_level": "concat", 00:12:53.345 "superblock": true, 00:12:53.345 "num_base_bdevs": 4, 00:12:53.345 "num_base_bdevs_discovered": 4, 00:12:53.345 "num_base_bdevs_operational": 4, 00:12:53.345 "base_bdevs_list": [ 00:12:53.345 { 00:12:53.346 "name": "NewBaseBdev", 00:12:53.346 "uuid": "7f664b70-5e33-4083-aba1-efa960c73d0d", 00:12:53.346 "is_configured": true, 00:12:53.346 "data_offset": 2048, 00:12:53.346 "data_size": 63488 00:12:53.346 }, 00:12:53.346 { 00:12:53.346 "name": "BaseBdev2", 00:12:53.346 "uuid": "6df3ea6c-bcfd-4e04-81ff-2bd738f9cca3", 00:12:53.346 "is_configured": true, 00:12:53.346 "data_offset": 2048, 00:12:53.346 "data_size": 63488 00:12:53.346 }, 00:12:53.346 { 00:12:53.346 
"name": "BaseBdev3", 00:12:53.346 "uuid": "3cdb20e9-519c-40f7-b569-2b5a8b8c3b4f", 00:12:53.346 "is_configured": true, 00:12:53.346 "data_offset": 2048, 00:12:53.346 "data_size": 63488 00:12:53.346 }, 00:12:53.346 { 00:12:53.346 "name": "BaseBdev4", 00:12:53.346 "uuid": "3ac0d5ae-c6c1-4007-92d2-fc1afd779718", 00:12:53.346 "is_configured": true, 00:12:53.346 "data_offset": 2048, 00:12:53.346 "data_size": 63488 00:12:53.346 } 00:12:53.346 ] 00:12:53.346 } 00:12:53.346 } 00:12:53.346 }' 00:12:53.346 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:53.346 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:53.346 BaseBdev2 00:12:53.346 BaseBdev3 00:12:53.346 BaseBdev4' 00:12:53.346 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.605 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:53.605 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.605 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:53.605 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.605 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.605 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.605 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.605 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:53.605 14:23:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:53.605 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.605 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:53.605 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.605 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.605 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.605 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.605 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.606 [2024-11-20 14:23:32.570926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:53.606 [2024-11-20 14:23:32.570966] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:53.606 [2024-11-20 14:23:32.571117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:53.606 [2024-11-20 14:23:32.571233] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:53.606 [2024-11-20 14:23:32.571262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72078 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72078 ']' 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72078 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.606 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72078 00:12:53.908 killing process with pid 72078 00:12:53.908 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:53.908 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:53.908 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72078' 00:12:53.908 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72078 00:12:53.908 [2024-11-20 14:23:32.608954] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:53.908 14:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72078 00:12:54.187 [2024-11-20 14:23:32.951822] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:55.124 ************************************ 00:12:55.124 END TEST raid_state_function_test_sb 00:12:55.124 ************************************ 00:12:55.124 14:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:55.124 00:12:55.124 real 0m12.599s 00:12:55.124 user 0m20.928s 00:12:55.124 sys 
0m1.615s 00:12:55.124 14:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.124 14:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.124 14:23:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:12:55.124 14:23:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:55.124 14:23:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.124 14:23:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:55.124 ************************************ 00:12:55.124 START TEST raid_superblock_test 00:12:55.124 ************************************ 00:12:55.124 14:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:12:55.124 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:55.124 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:55.124 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:55.124 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:55.124 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:55.124 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:55.124 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:55.124 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:55.124 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:55.125 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:55.125 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:12:55.125 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:55.125 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:55.125 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:55.125 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:55.125 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:55.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.125 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72754 00:12:55.125 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72754 00:12:55.125 14:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:55.125 14:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72754 ']' 00:12:55.125 14:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.125 14:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.125 14:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.125 14:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.125 14:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.384 [2024-11-20 14:23:34.201543] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:12:55.384 [2024-11-20 14:23:34.202462] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72754 ] 00:12:55.642 [2024-11-20 14:23:34.378383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.642 [2024-11-20 14:23:34.509884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.901 [2024-11-20 14:23:34.713074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.902 [2024-11-20 14:23:34.713153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:56.469 
14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.469 malloc1 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.469 [2024-11-20 14:23:35.224215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:56.469 [2024-11-20 14:23:35.224343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.469 [2024-11-20 14:23:35.224403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:56.469 [2024-11-20 14:23:35.224423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.469 [2024-11-20 14:23:35.227661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.469 [2024-11-20 14:23:35.227740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:56.469 pt1 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.469 malloc2 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.469 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.469 [2024-11-20 14:23:35.281590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:56.469 [2024-11-20 14:23:35.281701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.469 [2024-11-20 14:23:35.281753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:56.469 [2024-11-20 14:23:35.281773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.469 [2024-11-20 14:23:35.284905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.470 [2024-11-20 14:23:35.285185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:56.470 
pt2 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.470 malloc3 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.470 [2024-11-20 14:23:35.344321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:56.470 [2024-11-20 14:23:35.344413] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.470 [2024-11-20 14:23:35.344457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:56.470 [2024-11-20 14:23:35.344477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.470 [2024-11-20 14:23:35.347553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.470 [2024-11-20 14:23:35.347612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:56.470 pt3 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.470 malloc4 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.470 [2024-11-20 14:23:35.401286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:56.470 [2024-11-20 14:23:35.401633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.470 [2024-11-20 14:23:35.401693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:56.470 [2024-11-20 14:23:35.401714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.470 [2024-11-20 14:23:35.404736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.470 [2024-11-20 14:23:35.404924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:56.470 pt4 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.470 [2024-11-20 14:23:35.413402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:56.470 [2024-11-20 
14:23:35.416075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:56.470 [2024-11-20 14:23:35.416232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:56.470 [2024-11-20 14:23:35.416320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:56.470 [2024-11-20 14:23:35.416623] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:56.470 [2024-11-20 14:23:35.416646] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:56.470 [2024-11-20 14:23:35.417064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:56.470 [2024-11-20 14:23:35.417327] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:56.470 [2024-11-20 14:23:35.417354] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:56.470 [2024-11-20 14:23:35.417665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.470 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.729 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.729 "name": "raid_bdev1", 00:12:56.729 "uuid": "9c4f5b6d-3e4d-4a7d-9d53-5e0a93df1043", 00:12:56.729 "strip_size_kb": 64, 00:12:56.729 "state": "online", 00:12:56.729 "raid_level": "concat", 00:12:56.729 "superblock": true, 00:12:56.729 "num_base_bdevs": 4, 00:12:56.729 "num_base_bdevs_discovered": 4, 00:12:56.729 "num_base_bdevs_operational": 4, 00:12:56.729 "base_bdevs_list": [ 00:12:56.729 { 00:12:56.729 "name": "pt1", 00:12:56.729 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.729 "is_configured": true, 00:12:56.729 "data_offset": 2048, 00:12:56.729 "data_size": 63488 00:12:56.729 }, 00:12:56.729 { 00:12:56.729 "name": "pt2", 00:12:56.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.729 "is_configured": true, 00:12:56.729 "data_offset": 2048, 00:12:56.729 "data_size": 63488 00:12:56.729 }, 00:12:56.729 { 00:12:56.729 "name": "pt3", 00:12:56.729 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.729 "is_configured": true, 00:12:56.729 "data_offset": 2048, 00:12:56.729 
"data_size": 63488 00:12:56.729 }, 00:12:56.729 { 00:12:56.729 "name": "pt4", 00:12:56.729 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:56.729 "is_configured": true, 00:12:56.729 "data_offset": 2048, 00:12:56.729 "data_size": 63488 00:12:56.729 } 00:12:56.729 ] 00:12:56.729 }' 00:12:56.729 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.729 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.988 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:56.988 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:56.988 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:56.988 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:56.988 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:56.988 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:56.988 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:56.988 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.988 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:56.988 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.988 [2024-11-20 14:23:35.938173] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:56.988 14:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.249 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:57.249 "name": "raid_bdev1", 00:12:57.249 "aliases": [ 00:12:57.249 "9c4f5b6d-3e4d-4a7d-9d53-5e0a93df1043" 
00:12:57.249 ], 00:12:57.249 "product_name": "Raid Volume", 00:12:57.249 "block_size": 512, 00:12:57.249 "num_blocks": 253952, 00:12:57.249 "uuid": "9c4f5b6d-3e4d-4a7d-9d53-5e0a93df1043", 00:12:57.250 "assigned_rate_limits": { 00:12:57.250 "rw_ios_per_sec": 0, 00:12:57.250 "rw_mbytes_per_sec": 0, 00:12:57.250 "r_mbytes_per_sec": 0, 00:12:57.250 "w_mbytes_per_sec": 0 00:12:57.250 }, 00:12:57.250 "claimed": false, 00:12:57.250 "zoned": false, 00:12:57.250 "supported_io_types": { 00:12:57.250 "read": true, 00:12:57.250 "write": true, 00:12:57.250 "unmap": true, 00:12:57.250 "flush": true, 00:12:57.250 "reset": true, 00:12:57.250 "nvme_admin": false, 00:12:57.250 "nvme_io": false, 00:12:57.250 "nvme_io_md": false, 00:12:57.250 "write_zeroes": true, 00:12:57.250 "zcopy": false, 00:12:57.250 "get_zone_info": false, 00:12:57.250 "zone_management": false, 00:12:57.250 "zone_append": false, 00:12:57.250 "compare": false, 00:12:57.250 "compare_and_write": false, 00:12:57.250 "abort": false, 00:12:57.250 "seek_hole": false, 00:12:57.250 "seek_data": false, 00:12:57.250 "copy": false, 00:12:57.250 "nvme_iov_md": false 00:12:57.250 }, 00:12:57.250 "memory_domains": [ 00:12:57.250 { 00:12:57.250 "dma_device_id": "system", 00:12:57.250 "dma_device_type": 1 00:12:57.250 }, 00:12:57.250 { 00:12:57.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.250 "dma_device_type": 2 00:12:57.250 }, 00:12:57.250 { 00:12:57.250 "dma_device_id": "system", 00:12:57.250 "dma_device_type": 1 00:12:57.250 }, 00:12:57.250 { 00:12:57.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.250 "dma_device_type": 2 00:12:57.250 }, 00:12:57.250 { 00:12:57.250 "dma_device_id": "system", 00:12:57.250 "dma_device_type": 1 00:12:57.250 }, 00:12:57.250 { 00:12:57.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.250 "dma_device_type": 2 00:12:57.250 }, 00:12:57.250 { 00:12:57.250 "dma_device_id": "system", 00:12:57.250 "dma_device_type": 1 00:12:57.250 }, 00:12:57.250 { 00:12:57.250 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:57.250 "dma_device_type": 2 00:12:57.250 } 00:12:57.250 ], 00:12:57.250 "driver_specific": { 00:12:57.250 "raid": { 00:12:57.250 "uuid": "9c4f5b6d-3e4d-4a7d-9d53-5e0a93df1043", 00:12:57.250 "strip_size_kb": 64, 00:12:57.250 "state": "online", 00:12:57.250 "raid_level": "concat", 00:12:57.250 "superblock": true, 00:12:57.250 "num_base_bdevs": 4, 00:12:57.250 "num_base_bdevs_discovered": 4, 00:12:57.250 "num_base_bdevs_operational": 4, 00:12:57.250 "base_bdevs_list": [ 00:12:57.250 { 00:12:57.250 "name": "pt1", 00:12:57.250 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.250 "is_configured": true, 00:12:57.250 "data_offset": 2048, 00:12:57.250 "data_size": 63488 00:12:57.250 }, 00:12:57.250 { 00:12:57.250 "name": "pt2", 00:12:57.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.250 "is_configured": true, 00:12:57.250 "data_offset": 2048, 00:12:57.250 "data_size": 63488 00:12:57.250 }, 00:12:57.250 { 00:12:57.250 "name": "pt3", 00:12:57.250 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.250 "is_configured": true, 00:12:57.250 "data_offset": 2048, 00:12:57.250 "data_size": 63488 00:12:57.250 }, 00:12:57.250 { 00:12:57.250 "name": "pt4", 00:12:57.250 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:57.250 "is_configured": true, 00:12:57.250 "data_offset": 2048, 00:12:57.250 "data_size": 63488 00:12:57.250 } 00:12:57.250 ] 00:12:57.250 } 00:12:57.250 } 00:12:57.250 }' 00:12:57.250 14:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:57.250 pt2 00:12:57.250 pt3 00:12:57.250 pt4' 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.250 14:23:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.250 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:57.515 [2024-11-20 14:23:36.258213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9c4f5b6d-3e4d-4a7d-9d53-5e0a93df1043 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9c4f5b6d-3e4d-4a7d-9d53-5e0a93df1043 ']' 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.515 [2024-11-20 14:23:36.329858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:57.515 [2024-11-20 14:23:36.329895] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.515 [2024-11-20 14:23:36.330020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.515 [2024-11-20 14:23:36.330136] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.515 [2024-11-20 14:23:36.330167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:57.515 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.516 [2024-11-20 14:23:36.477908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:57.516 [2024-11-20 14:23:36.480438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:57.516 [2024-11-20 14:23:36.480516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:57.516 [2024-11-20 14:23:36.480582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:57.516 [2024-11-20 14:23:36.480670] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:57.516 [2024-11-20 14:23:36.480758] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:57.516 [2024-11-20 14:23:36.480800] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:57.516 [2024-11-20 14:23:36.480842] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:57.516 [2024-11-20 14:23:36.480870] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:57.516 [2024-11-20 14:23:36.480890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:12:57.516 request: 00:12:57.516 { 00:12:57.516 "name": "raid_bdev1", 00:12:57.516 "raid_level": "concat", 00:12:57.516 "base_bdevs": [ 00:12:57.516 "malloc1", 00:12:57.516 "malloc2", 00:12:57.516 "malloc3", 00:12:57.516 "malloc4" 00:12:57.516 ], 00:12:57.516 "strip_size_kb": 64, 00:12:57.516 "superblock": false, 00:12:57.516 "method": "bdev_raid_create", 00:12:57.516 "req_id": 1 00:12:57.516 } 00:12:57.516 Got JSON-RPC error response 00:12:57.516 response: 00:12:57.516 { 00:12:57.516 "code": -17, 00:12:57.516 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:57.516 } 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:57.516 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.776 [2024-11-20 14:23:36.557892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:57.776 [2024-11-20 14:23:36.558119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.776 [2024-11-20 14:23:36.558280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:57.776 [2024-11-20 14:23:36.558478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.776 [2024-11-20 14:23:36.561545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.776 [2024-11-20 14:23:36.561748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:57.776 [2024-11-20 14:23:36.562044] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:57.776 [2024-11-20 14:23:36.562277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:57.776 pt1 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.776 "name": "raid_bdev1", 00:12:57.776 "uuid": "9c4f5b6d-3e4d-4a7d-9d53-5e0a93df1043", 00:12:57.776 "strip_size_kb": 64, 00:12:57.776 "state": "configuring", 00:12:57.776 "raid_level": "concat", 00:12:57.776 "superblock": true, 00:12:57.776 "num_base_bdevs": 4, 00:12:57.776 "num_base_bdevs_discovered": 1, 00:12:57.776 "num_base_bdevs_operational": 4, 00:12:57.776 "base_bdevs_list": [ 00:12:57.776 { 00:12:57.776 "name": "pt1", 00:12:57.776 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.776 "is_configured": true, 00:12:57.776 "data_offset": 2048, 00:12:57.776 "data_size": 63488 00:12:57.776 }, 00:12:57.776 { 00:12:57.776 "name": null, 00:12:57.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.776 "is_configured": false, 00:12:57.776 "data_offset": 2048, 00:12:57.776 "data_size": 63488 00:12:57.776 }, 00:12:57.776 { 00:12:57.776 "name": null, 00:12:57.776 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.776 "is_configured": false, 00:12:57.776 "data_offset": 2048, 00:12:57.776 "data_size": 63488 00:12:57.776 }, 00:12:57.776 { 00:12:57.776 "name": null, 00:12:57.776 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:57.776 "is_configured": false, 00:12:57.776 "data_offset": 2048, 00:12:57.776 "data_size": 63488 00:12:57.776 } 00:12:57.776 ] 00:12:57.776 }' 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.776 14:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.343 [2024-11-20 14:23:37.066280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:58.343 [2024-11-20 14:23:37.066534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.343 [2024-11-20 14:23:37.066619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:58.343 [2024-11-20 14:23:37.066875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.343 [2024-11-20 14:23:37.067683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.343 [2024-11-20 14:23:37.067862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:58.343 [2024-11-20 14:23:37.068019] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:58.343 [2024-11-20 14:23:37.068075] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:58.343 pt2 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.343 [2024-11-20 14:23:37.074253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.343 14:23:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.343 "name": "raid_bdev1", 00:12:58.343 "uuid": "9c4f5b6d-3e4d-4a7d-9d53-5e0a93df1043", 00:12:58.343 "strip_size_kb": 64, 00:12:58.343 "state": "configuring", 00:12:58.343 "raid_level": "concat", 00:12:58.343 "superblock": true, 00:12:58.343 "num_base_bdevs": 4, 00:12:58.343 "num_base_bdevs_discovered": 1, 00:12:58.343 "num_base_bdevs_operational": 4, 00:12:58.343 "base_bdevs_list": [ 00:12:58.343 { 00:12:58.343 "name": "pt1", 00:12:58.343 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.343 "is_configured": true, 00:12:58.343 "data_offset": 2048, 00:12:58.343 "data_size": 63488 00:12:58.343 }, 00:12:58.343 { 00:12:58.343 "name": null, 00:12:58.343 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.343 "is_configured": false, 00:12:58.343 "data_offset": 0, 00:12:58.343 "data_size": 63488 00:12:58.343 }, 00:12:58.343 { 00:12:58.343 "name": null, 00:12:58.343 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.343 "is_configured": false, 00:12:58.343 "data_offset": 2048, 00:12:58.343 "data_size": 63488 00:12:58.343 }, 00:12:58.343 { 00:12:58.343 "name": null, 00:12:58.343 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:58.343 "is_configured": false, 00:12:58.343 "data_offset": 2048, 00:12:58.343 "data_size": 63488 00:12:58.343 } 00:12:58.343 ] 00:12:58.343 }' 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.343 14:23:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:58.602 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:58.602 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:58.602 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:58.602 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.602 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.861 [2024-11-20 14:23:37.582405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:58.861 [2024-11-20 14:23:37.582509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.861 [2024-11-20 14:23:37.582548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:58.861 [2024-11-20 14:23:37.582567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.861 [2024-11-20 14:23:37.583176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.861 [2024-11-20 14:23:37.583221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:58.861 [2024-11-20 14:23:37.583340] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:58.861 [2024-11-20 14:23:37.583377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:58.861 pt2 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.861 [2024-11-20 14:23:37.590366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:58.861 [2024-11-20 14:23:37.590435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.861 [2024-11-20 14:23:37.590468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:58.861 [2024-11-20 14:23:37.590485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.861 [2024-11-20 14:23:37.590951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.861 [2024-11-20 14:23:37.591011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:58.861 [2024-11-20 14:23:37.591105] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:58.861 [2024-11-20 14:23:37.591148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:58.861 pt3 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.861 [2024-11-20 14:23:37.598343] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:58.861 [2024-11-20 14:23:37.598451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.861 [2024-11-20 14:23:37.598484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:58.861 [2024-11-20 14:23:37.598501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.861 [2024-11-20 14:23:37.598992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.861 [2024-11-20 14:23:37.599040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:58.861 [2024-11-20 14:23:37.599135] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:58.861 [2024-11-20 14:23:37.599174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:58.861 [2024-11-20 14:23:37.599365] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:58.861 [2024-11-20 14:23:37.599399] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:58.861 [2024-11-20 14:23:37.599709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:58.861 [2024-11-20 14:23:37.599915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:58.861 [2024-11-20 14:23:37.599950] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:58.861 [2024-11-20 14:23:37.600140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.861 pt4 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.861 "name": "raid_bdev1", 00:12:58.861 "uuid": "9c4f5b6d-3e4d-4a7d-9d53-5e0a93df1043", 00:12:58.861 "strip_size_kb": 64, 00:12:58.861 "state": "online", 00:12:58.861 "raid_level": "concat", 00:12:58.861 
"superblock": true, 00:12:58.861 "num_base_bdevs": 4, 00:12:58.861 "num_base_bdevs_discovered": 4, 00:12:58.861 "num_base_bdevs_operational": 4, 00:12:58.861 "base_bdevs_list": [ 00:12:58.861 { 00:12:58.861 "name": "pt1", 00:12:58.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.861 "is_configured": true, 00:12:58.861 "data_offset": 2048, 00:12:58.861 "data_size": 63488 00:12:58.861 }, 00:12:58.861 { 00:12:58.861 "name": "pt2", 00:12:58.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.861 "is_configured": true, 00:12:58.861 "data_offset": 2048, 00:12:58.861 "data_size": 63488 00:12:58.861 }, 00:12:58.861 { 00:12:58.861 "name": "pt3", 00:12:58.861 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.861 "is_configured": true, 00:12:58.861 "data_offset": 2048, 00:12:58.861 "data_size": 63488 00:12:58.861 }, 00:12:58.861 { 00:12:58.861 "name": "pt4", 00:12:58.861 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:58.861 "is_configured": true, 00:12:58.861 "data_offset": 2048, 00:12:58.861 "data_size": 63488 00:12:58.861 } 00:12:58.861 ] 00:12:58.861 }' 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.861 14:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:59.430 14:23:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:59.430 [2024-11-20 14:23:38.131286] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:59.430 "name": "raid_bdev1", 00:12:59.430 "aliases": [ 00:12:59.430 "9c4f5b6d-3e4d-4a7d-9d53-5e0a93df1043" 00:12:59.430 ], 00:12:59.430 "product_name": "Raid Volume", 00:12:59.430 "block_size": 512, 00:12:59.430 "num_blocks": 253952, 00:12:59.430 "uuid": "9c4f5b6d-3e4d-4a7d-9d53-5e0a93df1043", 00:12:59.430 "assigned_rate_limits": { 00:12:59.430 "rw_ios_per_sec": 0, 00:12:59.430 "rw_mbytes_per_sec": 0, 00:12:59.430 "r_mbytes_per_sec": 0, 00:12:59.430 "w_mbytes_per_sec": 0 00:12:59.430 }, 00:12:59.430 "claimed": false, 00:12:59.430 "zoned": false, 00:12:59.430 "supported_io_types": { 00:12:59.430 "read": true, 00:12:59.430 "write": true, 00:12:59.430 "unmap": true, 00:12:59.430 "flush": true, 00:12:59.430 "reset": true, 00:12:59.430 "nvme_admin": false, 00:12:59.430 "nvme_io": false, 00:12:59.430 "nvme_io_md": false, 00:12:59.430 "write_zeroes": true, 00:12:59.430 "zcopy": false, 00:12:59.430 "get_zone_info": false, 00:12:59.430 "zone_management": false, 00:12:59.430 "zone_append": false, 00:12:59.430 "compare": false, 00:12:59.430 "compare_and_write": false, 00:12:59.430 "abort": false, 00:12:59.430 "seek_hole": false, 00:12:59.430 "seek_data": false, 00:12:59.430 "copy": false, 00:12:59.430 "nvme_iov_md": false 00:12:59.430 }, 00:12:59.430 
"memory_domains": [ 00:12:59.430 { 00:12:59.430 "dma_device_id": "system", 00:12:59.430 "dma_device_type": 1 00:12:59.430 }, 00:12:59.430 { 00:12:59.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.430 "dma_device_type": 2 00:12:59.430 }, 00:12:59.430 { 00:12:59.430 "dma_device_id": "system", 00:12:59.430 "dma_device_type": 1 00:12:59.430 }, 00:12:59.430 { 00:12:59.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.430 "dma_device_type": 2 00:12:59.430 }, 00:12:59.430 { 00:12:59.430 "dma_device_id": "system", 00:12:59.430 "dma_device_type": 1 00:12:59.430 }, 00:12:59.430 { 00:12:59.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.430 "dma_device_type": 2 00:12:59.430 }, 00:12:59.430 { 00:12:59.430 "dma_device_id": "system", 00:12:59.430 "dma_device_type": 1 00:12:59.430 }, 00:12:59.430 { 00:12:59.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.430 "dma_device_type": 2 00:12:59.430 } 00:12:59.430 ], 00:12:59.430 "driver_specific": { 00:12:59.430 "raid": { 00:12:59.430 "uuid": "9c4f5b6d-3e4d-4a7d-9d53-5e0a93df1043", 00:12:59.430 "strip_size_kb": 64, 00:12:59.430 "state": "online", 00:12:59.430 "raid_level": "concat", 00:12:59.430 "superblock": true, 00:12:59.430 "num_base_bdevs": 4, 00:12:59.430 "num_base_bdevs_discovered": 4, 00:12:59.430 "num_base_bdevs_operational": 4, 00:12:59.430 "base_bdevs_list": [ 00:12:59.430 { 00:12:59.430 "name": "pt1", 00:12:59.430 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:59.430 "is_configured": true, 00:12:59.430 "data_offset": 2048, 00:12:59.430 "data_size": 63488 00:12:59.430 }, 00:12:59.430 { 00:12:59.430 "name": "pt2", 00:12:59.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.430 "is_configured": true, 00:12:59.430 "data_offset": 2048, 00:12:59.430 "data_size": 63488 00:12:59.430 }, 00:12:59.430 { 00:12:59.430 "name": "pt3", 00:12:59.430 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.430 "is_configured": true, 00:12:59.430 "data_offset": 2048, 00:12:59.430 "data_size": 63488 
00:12:59.430 }, 00:12:59.430 { 00:12:59.430 "name": "pt4", 00:12:59.430 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:59.430 "is_configured": true, 00:12:59.430 "data_offset": 2048, 00:12:59.430 "data_size": 63488 00:12:59.430 } 00:12:59.430 ] 00:12:59.430 } 00:12:59.430 } 00:12:59.430 }' 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:59.430 pt2 00:12:59.430 pt3 00:12:59.430 pt4' 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.430 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:59.431 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.431 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.431 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.431 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.689 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.689 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.689 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.689 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:59.689 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.690 [2024-11-20 14:23:38.495302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9c4f5b6d-3e4d-4a7d-9d53-5e0a93df1043 '!=' 9c4f5b6d-3e4d-4a7d-9d53-5e0a93df1043 ']' 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72754 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72754 ']' 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72754 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72754 00:12:59.690 killing process with pid 72754 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72754' 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72754 00:12:59.690 [2024-11-20 14:23:38.573032] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.690 [2024-11-20 14:23:38.573136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.690 14:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72754 00:12:59.690 [2024-11-20 14:23:38.573241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.690 [2024-11-20 14:23:38.573259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:00.260 [2024-11-20 14:23:38.937720] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:01.198 ************************************ 00:13:01.198 END TEST raid_superblock_test 00:13:01.198 ************************************ 00:13:01.198 14:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:01.198 00:13:01.198 real 0m5.954s 00:13:01.198 user 0m8.894s 00:13:01.198 sys 0m0.870s 00:13:01.198 14:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.198 14:23:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.198 14:23:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:13:01.198 14:23:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:01.198 14:23:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.198 14:23:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:01.198 ************************************ 00:13:01.198 START TEST raid_read_error_test 00:13:01.198 ************************************ 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.t2S21Vmjp0 00:13:01.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73024 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73024 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73024 ']' 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.198 14:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.199 14:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.199 14:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.199 14:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.199 [2024-11-20 14:23:40.165597] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:13:01.199 [2024-11-20 14:23:40.165767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73024 ] 00:13:01.457 [2024-11-20 14:23:40.355109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.714 [2024-11-20 14:23:40.535163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.972 [2024-11-20 14:23:40.805102] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.972 [2024-11-20 14:23:40.805196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.540 BaseBdev1_malloc 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.540 true 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.540 [2024-11-20 14:23:41.275871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:02.540 [2024-11-20 14:23:41.275956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.540 [2024-11-20 14:23:41.275986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:02.540 [2024-11-20 14:23:41.276041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.540 [2024-11-20 14:23:41.278920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.540 [2024-11-20 14:23:41.278975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:02.540 BaseBdev1 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.540 BaseBdev2_malloc 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.540 true 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.540 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.540 [2024-11-20 14:23:41.332705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:02.541 [2024-11-20 14:23:41.332779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.541 [2024-11-20 14:23:41.332807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:02.541 [2024-11-20 14:23:41.332824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.541 [2024-11-20 14:23:41.335617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.541 [2024-11-20 14:23:41.335671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:02.541 BaseBdev2 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.541 BaseBdev3_malloc 00:13:02.541 14:23:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.541 true 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.541 [2024-11-20 14:23:41.397146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:02.541 [2024-11-20 14:23:41.397362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.541 [2024-11-20 14:23:41.397402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:02.541 [2024-11-20 14:23:41.397423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.541 [2024-11-20 14:23:41.400271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.541 [2024-11-20 14:23:41.400325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:02.541 BaseBdev3 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.541 BaseBdev4_malloc 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.541 true 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.541 [2024-11-20 14:23:41.457485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:02.541 [2024-11-20 14:23:41.457558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.541 [2024-11-20 14:23:41.457591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:02.541 [2024-11-20 14:23:41.457609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.541 [2024-11-20 14:23:41.460399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.541 [2024-11-20 14:23:41.460492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:02.541 BaseBdev4 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.541 [2024-11-20 14:23:41.465587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:02.541 [2024-11-20 14:23:41.468255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.541 [2024-11-20 14:23:41.468366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:02.541 [2024-11-20 14:23:41.468484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:02.541 [2024-11-20 14:23:41.468781] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:02.541 [2024-11-20 14:23:41.468811] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:02.541 [2024-11-20 14:23:41.469299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:02.541 [2024-11-20 14:23:41.469688] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:02.541 [2024-11-20 14:23:41.469823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:02.541 [2024-11-20 14:23:41.470280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:02.541 14:23:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.541 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.799 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.799 "name": "raid_bdev1", 00:13:02.799 "uuid": "38b3f3f5-6eed-471c-91f3-d424975f192d", 00:13:02.799 "strip_size_kb": 64, 00:13:02.799 "state": "online", 00:13:02.799 "raid_level": "concat", 00:13:02.799 "superblock": true, 00:13:02.799 "num_base_bdevs": 4, 00:13:02.799 "num_base_bdevs_discovered": 4, 00:13:02.799 "num_base_bdevs_operational": 4, 00:13:02.799 "base_bdevs_list": [ 
00:13:02.799 { 00:13:02.799 "name": "BaseBdev1", 00:13:02.799 "uuid": "3c710f4e-6a98-5f9a-85a6-be05dcd4f651", 00:13:02.799 "is_configured": true, 00:13:02.799 "data_offset": 2048, 00:13:02.799 "data_size": 63488 00:13:02.799 }, 00:13:02.799 { 00:13:02.799 "name": "BaseBdev2", 00:13:02.799 "uuid": "02a828b1-9250-55ea-8022-cfd836451647", 00:13:02.799 "is_configured": true, 00:13:02.799 "data_offset": 2048, 00:13:02.800 "data_size": 63488 00:13:02.800 }, 00:13:02.800 { 00:13:02.800 "name": "BaseBdev3", 00:13:02.800 "uuid": "28bc7a67-ac6c-5372-a6bf-0c6e8bd2d067", 00:13:02.800 "is_configured": true, 00:13:02.800 "data_offset": 2048, 00:13:02.800 "data_size": 63488 00:13:02.800 }, 00:13:02.800 { 00:13:02.800 "name": "BaseBdev4", 00:13:02.800 "uuid": "1416a565-8d50-5d1d-aa79-c9bdb7b25856", 00:13:02.800 "is_configured": true, 00:13:02.800 "data_offset": 2048, 00:13:02.800 "data_size": 63488 00:13:02.800 } 00:13:02.800 ] 00:13:02.800 }' 00:13:02.800 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.800 14:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.058 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:03.058 14:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:03.316 [2024-11-20 14:23:42.071792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.251 14:23:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.251 14:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.251 14:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.251 14:23:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.251 "name": "raid_bdev1", 00:13:04.251 "uuid": "38b3f3f5-6eed-471c-91f3-d424975f192d", 00:13:04.251 "strip_size_kb": 64, 00:13:04.251 "state": "online", 00:13:04.251 "raid_level": "concat", 00:13:04.251 "superblock": true, 00:13:04.251 "num_base_bdevs": 4, 00:13:04.251 "num_base_bdevs_discovered": 4, 00:13:04.251 "num_base_bdevs_operational": 4, 00:13:04.251 "base_bdevs_list": [ 00:13:04.251 { 00:13:04.251 "name": "BaseBdev1", 00:13:04.252 "uuid": "3c710f4e-6a98-5f9a-85a6-be05dcd4f651", 00:13:04.252 "is_configured": true, 00:13:04.252 "data_offset": 2048, 00:13:04.252 "data_size": 63488 00:13:04.252 }, 00:13:04.252 { 00:13:04.252 "name": "BaseBdev2", 00:13:04.252 "uuid": "02a828b1-9250-55ea-8022-cfd836451647", 00:13:04.252 "is_configured": true, 00:13:04.252 "data_offset": 2048, 00:13:04.252 "data_size": 63488 00:13:04.252 }, 00:13:04.252 { 00:13:04.252 "name": "BaseBdev3", 00:13:04.252 "uuid": "28bc7a67-ac6c-5372-a6bf-0c6e8bd2d067", 00:13:04.252 "is_configured": true, 00:13:04.252 "data_offset": 2048, 00:13:04.252 "data_size": 63488 00:13:04.252 }, 00:13:04.252 { 00:13:04.252 "name": "BaseBdev4", 00:13:04.252 "uuid": "1416a565-8d50-5d1d-aa79-c9bdb7b25856", 00:13:04.252 "is_configured": true, 00:13:04.252 "data_offset": 2048, 00:13:04.252 "data_size": 63488 00:13:04.252 } 00:13:04.252 ] 00:13:04.252 }' 00:13:04.252 14:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.252 14:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.878 14:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:04.878 14:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.878 14:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.878 [2024-11-20 14:23:43.535793] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:04.878 [2024-11-20 14:23:43.535842] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:04.878 { 00:13:04.878 "results": [ 00:13:04.878 { 00:13:04.878 "job": "raid_bdev1", 00:13:04.878 "core_mask": "0x1", 00:13:04.878 "workload": "randrw", 00:13:04.878 "percentage": 50, 00:13:04.878 "status": "finished", 00:13:04.878 "queue_depth": 1, 00:13:04.878 "io_size": 131072, 00:13:04.878 "runtime": 1.461119, 00:13:04.878 "iops": 10408.460912492412, 00:13:04.878 "mibps": 1301.0576140615515, 00:13:04.878 "io_failed": 1, 00:13:04.878 "io_timeout": 0, 00:13:04.878 "avg_latency_us": 133.436490116498, 00:13:04.878 "min_latency_us": 41.658181818181816, 00:13:04.878 "max_latency_us": 1809.6872727272728 00:13:04.878 } 00:13:04.878 ], 00:13:04.878 "core_count": 1 00:13:04.878 } 00:13:04.878 [2024-11-20 14:23:43.539396] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.878 [2024-11-20 14:23:43.539475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.878 [2024-11-20 14:23:43.539536] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:04.878 [2024-11-20 14:23:43.539555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:04.878 14:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.878 14:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73024 00:13:04.878 14:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73024 ']' 00:13:04.878 14:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73024 00:13:04.878 14:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:04.878 14:23:43 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.878 14:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73024 00:13:04.878 killing process with pid 73024 00:13:04.878 14:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:04.878 14:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:04.878 14:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73024' 00:13:04.878 14:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73024 00:13:04.878 14:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73024 00:13:04.878 [2024-11-20 14:23:43.577009] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:05.136 [2024-11-20 14:23:43.899306] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:06.512 14:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.t2S21Vmjp0 00:13:06.512 14:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:06.512 14:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:06.512 14:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.68 00:13:06.512 14:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:06.512 14:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:06.512 14:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:06.512 14:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.68 != \0\.\0\0 ]] 00:13:06.512 00:13:06.512 real 0m5.069s 00:13:06.512 user 0m6.218s 00:13:06.512 sys 0m0.630s 00:13:06.512 14:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:13:06.512 ************************************ 00:13:06.512 END TEST raid_read_error_test 00:13:06.512 ************************************ 00:13:06.512 14:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.512 14:23:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:13:06.512 14:23:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:06.512 14:23:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.512 14:23:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:06.512 ************************************ 00:13:06.512 START TEST raid_write_error_test 00:13:06.512 ************************************ 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Hlm3LizfnT 00:13:06.512 14:23:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73170 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73170 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73170 ']' 00:13:06.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.512 14:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.512 [2024-11-20 14:23:45.306430] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:13:06.512 [2024-11-20 14:23:45.306678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73170 ] 00:13:06.771 [2024-11-20 14:23:45.505135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.771 [2024-11-20 14:23:45.686727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.029 [2024-11-20 14:23:45.962247] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.029 [2024-11-20 14:23:45.962332] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.596 BaseBdev1_malloc 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.596 true 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.596 [2024-11-20 14:23:46.414999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:07.596 [2024-11-20 14:23:46.415088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.596 [2024-11-20 14:23:46.415135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:07.596 [2024-11-20 14:23:46.415161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.596 [2024-11-20 14:23:46.418242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.596 [2024-11-20 14:23:46.418298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:07.596 BaseBdev1 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.596 BaseBdev2_malloc 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:07.596 14:23:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.596 true 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.596 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.597 [2024-11-20 14:23:46.473243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:07.597 [2024-11-20 14:23:46.473346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.597 [2024-11-20 14:23:46.473373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:07.597 [2024-11-20 14:23:46.473391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.597 [2024-11-20 14:23:46.476244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.597 [2024-11-20 14:23:46.476432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:07.597 BaseBdev2 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:07.597 BaseBdev3_malloc 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.597 true 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.597 [2024-11-20 14:23:46.544123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:07.597 [2024-11-20 14:23:46.544192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.597 [2024-11-20 14:23:46.544220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:07.597 [2024-11-20 14:23:46.544238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.597 [2024-11-20 14:23:46.547079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.597 [2024-11-20 14:23:46.547129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:07.597 BaseBdev3 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.597 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.857 BaseBdev4_malloc 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.857 true 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.857 [2024-11-20 14:23:46.604535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:07.857 [2024-11-20 14:23:46.604603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.857 [2024-11-20 14:23:46.604631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:07.857 [2024-11-20 14:23:46.604650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.857 [2024-11-20 14:23:46.607411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.857 [2024-11-20 14:23:46.607467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:07.857 BaseBdev4 
00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.857 [2024-11-20 14:23:46.612645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.857 [2024-11-20 14:23:46.615059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:07.857 [2024-11-20 14:23:46.615175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:07.857 [2024-11-20 14:23:46.615293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:07.857 [2024-11-20 14:23:46.615672] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:07.857 [2024-11-20 14:23:46.615696] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:07.857 [2024-11-20 14:23:46.616028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:07.857 [2024-11-20 14:23:46.616283] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:07.857 [2024-11-20 14:23:46.616304] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:07.857 [2024-11-20 14:23:46.616583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.857 "name": "raid_bdev1", 00:13:07.857 "uuid": "21c67760-eca3-4dd3-8d23-35154365d3e7", 00:13:07.857 "strip_size_kb": 64, 00:13:07.857 "state": "online", 00:13:07.857 "raid_level": "concat", 00:13:07.857 "superblock": true, 00:13:07.857 "num_base_bdevs": 4, 00:13:07.857 "num_base_bdevs_discovered": 4, 00:13:07.857 
"num_base_bdevs_operational": 4, 00:13:07.857 "base_bdevs_list": [ 00:13:07.857 { 00:13:07.857 "name": "BaseBdev1", 00:13:07.857 "uuid": "3a4a87fe-58f6-5e22-8034-ce04a9f954a5", 00:13:07.857 "is_configured": true, 00:13:07.857 "data_offset": 2048, 00:13:07.857 "data_size": 63488 00:13:07.857 }, 00:13:07.857 { 00:13:07.857 "name": "BaseBdev2", 00:13:07.857 "uuid": "288e5bd1-79be-546e-9665-70c6d7d3f6df", 00:13:07.857 "is_configured": true, 00:13:07.857 "data_offset": 2048, 00:13:07.857 "data_size": 63488 00:13:07.857 }, 00:13:07.857 { 00:13:07.857 "name": "BaseBdev3", 00:13:07.857 "uuid": "c2036056-019a-5396-a572-726ed2aa021d", 00:13:07.857 "is_configured": true, 00:13:07.857 "data_offset": 2048, 00:13:07.857 "data_size": 63488 00:13:07.857 }, 00:13:07.857 { 00:13:07.857 "name": "BaseBdev4", 00:13:07.857 "uuid": "4f3a06a7-4157-5618-abcf-53fe57fad07f", 00:13:07.857 "is_configured": true, 00:13:07.857 "data_offset": 2048, 00:13:07.857 "data_size": 63488 00:13:07.857 } 00:13:07.857 ] 00:13:07.857 }' 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.857 14:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.426 14:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:08.426 14:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:08.426 [2024-11-20 14:23:47.314278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.363 14:23:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.363 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.363 "name": "raid_bdev1", 00:13:09.363 "uuid": "21c67760-eca3-4dd3-8d23-35154365d3e7", 00:13:09.363 "strip_size_kb": 64, 00:13:09.363 "state": "online", 00:13:09.363 "raid_level": "concat", 00:13:09.363 "superblock": true, 00:13:09.363 "num_base_bdevs": 4, 00:13:09.363 "num_base_bdevs_discovered": 4, 00:13:09.363 "num_base_bdevs_operational": 4, 00:13:09.363 "base_bdevs_list": [ 00:13:09.363 { 00:13:09.363 "name": "BaseBdev1", 00:13:09.363 "uuid": "3a4a87fe-58f6-5e22-8034-ce04a9f954a5", 00:13:09.363 "is_configured": true, 00:13:09.363 "data_offset": 2048, 00:13:09.363 "data_size": 63488 00:13:09.363 }, 00:13:09.363 { 00:13:09.363 "name": "BaseBdev2", 00:13:09.363 "uuid": "288e5bd1-79be-546e-9665-70c6d7d3f6df", 00:13:09.363 "is_configured": true, 00:13:09.363 "data_offset": 2048, 00:13:09.363 "data_size": 63488 00:13:09.363 }, 00:13:09.363 { 00:13:09.363 "name": "BaseBdev3", 00:13:09.363 "uuid": "c2036056-019a-5396-a572-726ed2aa021d", 00:13:09.363 "is_configured": true, 00:13:09.363 "data_offset": 2048, 00:13:09.364 "data_size": 63488 00:13:09.364 }, 00:13:09.364 { 00:13:09.364 "name": "BaseBdev4", 00:13:09.364 "uuid": "4f3a06a7-4157-5618-abcf-53fe57fad07f", 00:13:09.364 "is_configured": true, 00:13:09.364 "data_offset": 2048, 00:13:09.364 "data_size": 63488 00:13:09.364 } 00:13:09.364 ] 00:13:09.364 }' 00:13:09.364 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.364 14:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.959 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:09.959 14:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.959 14:23:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.959 [2024-11-20 14:23:48.680925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:09.959 [2024-11-20 14:23:48.681129] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.959 [2024-11-20 14:23:48.684561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.959 [2024-11-20 14:23:48.684780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.959 [2024-11-20 14:23:48.684887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.959 [2024-11-20 14:23:48.685082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:09.959 { 00:13:09.959 "results": [ 00:13:09.959 { 00:13:09.959 "job": "raid_bdev1", 00:13:09.959 "core_mask": "0x1", 00:13:09.959 "workload": "randrw", 00:13:09.959 "percentage": 50, 00:13:09.959 "status": "finished", 00:13:09.959 "queue_depth": 1, 00:13:09.959 "io_size": 131072, 00:13:09.959 "runtime": 1.364178, 00:13:09.959 "iops": 10577.065456267437, 00:13:09.959 "mibps": 1322.1331820334296, 00:13:09.959 "io_failed": 1, 00:13:09.959 "io_timeout": 0, 00:13:09.959 "avg_latency_us": 131.5738716058716, 00:13:09.959 "min_latency_us": 39.09818181818182, 00:13:09.959 "max_latency_us": 1839.4763636363637 00:13:09.959 } 00:13:09.959 ], 00:13:09.959 "core_count": 1 00:13:09.959 } 00:13:09.959 14:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.959 14:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73170 00:13:09.960 14:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73170 ']' 00:13:09.960 14:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73170 00:13:09.960 14:23:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:09.960 14:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.960 14:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73170 00:13:09.960 killing process with pid 73170 00:13:09.960 14:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.960 14:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.960 14:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73170' 00:13:09.960 14:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73170 00:13:09.960 14:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73170 00:13:09.960 [2024-11-20 14:23:48.717481] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:10.218 [2024-11-20 14:23:49.007561] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:11.154 14:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Hlm3LizfnT 00:13:11.154 14:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:11.154 14:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:11.414 14:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:13:11.414 14:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:11.414 ************************************ 00:13:11.414 END TEST raid_write_error_test 00:13:11.414 ************************************ 00:13:11.414 14:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:11.414 14:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:11.414 14:23:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:13:11.414 00:13:11.414 real 0m4.963s 00:13:11.414 user 0m6.191s 00:13:11.414 sys 0m0.610s 00:13:11.414 14:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.414 14:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.414 14:23:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:11.414 14:23:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:13:11.414 14:23:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:11.414 14:23:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.414 14:23:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:11.414 ************************************ 00:13:11.414 START TEST raid_state_function_test 00:13:11.414 ************************************ 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:11.414 14:23:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73319 00:13:11.414 Process raid pid: 73319 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73319' 00:13:11.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73319 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73319 ']' 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.414 14:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.414 [2024-11-20 14:23:50.303978] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:13:11.414 [2024-11-20 14:23:50.304411] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.673 [2024-11-20 14:23:50.488633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.673 [2024-11-20 14:23:50.621236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.932 [2024-11-20 14:23:50.828675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.932 [2024-11-20 14:23:50.828731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.499 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.499 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:12.499 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:12.499 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.499 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.499 [2024-11-20 14:23:51.364573] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:12.499 [2024-11-20 14:23:51.364658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:12.499 [2024-11-20 14:23:51.364676] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:12.500 [2024-11-20 14:23:51.364692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:12.500 [2024-11-20 14:23:51.364702] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:12.500 [2024-11-20 14:23:51.364716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:12.500 [2024-11-20 14:23:51.364725] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:12.500 [2024-11-20 14:23:51.364751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.500 "name": "Existed_Raid", 00:13:12.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.500 "strip_size_kb": 0, 00:13:12.500 "state": "configuring", 00:13:12.500 "raid_level": "raid1", 00:13:12.500 "superblock": false, 00:13:12.500 "num_base_bdevs": 4, 00:13:12.500 "num_base_bdevs_discovered": 0, 00:13:12.500 "num_base_bdevs_operational": 4, 00:13:12.500 "base_bdevs_list": [ 00:13:12.500 { 00:13:12.500 "name": "BaseBdev1", 00:13:12.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.500 "is_configured": false, 00:13:12.500 "data_offset": 0, 00:13:12.500 "data_size": 0 00:13:12.500 }, 00:13:12.500 { 00:13:12.500 "name": "BaseBdev2", 00:13:12.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.500 "is_configured": false, 00:13:12.500 "data_offset": 0, 00:13:12.500 "data_size": 0 00:13:12.500 }, 00:13:12.500 { 00:13:12.500 "name": "BaseBdev3", 00:13:12.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.500 "is_configured": false, 00:13:12.500 "data_offset": 0, 00:13:12.500 "data_size": 0 00:13:12.500 }, 00:13:12.500 { 00:13:12.500 "name": "BaseBdev4", 00:13:12.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.500 "is_configured": false, 00:13:12.500 "data_offset": 0, 00:13:12.500 "data_size": 0 00:13:12.500 } 00:13:12.500 ] 00:13:12.500 }' 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.500 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.090 [2024-11-20 14:23:51.889712] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:13.090 [2024-11-20 14:23:51.889762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.090 [2024-11-20 14:23:51.897696] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:13.090 [2024-11-20 14:23:51.897767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:13.090 [2024-11-20 14:23:51.897783] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.090 [2024-11-20 14:23:51.897800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.090 [2024-11-20 14:23:51.897810] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:13.090 [2024-11-20 14:23:51.897825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:13.090 [2024-11-20 14:23:51.897835] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:13.090 [2024-11-20 14:23:51.897849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.090 [2024-11-20 14:23:51.944702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.090 BaseBdev1 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.090 [ 00:13:13.090 { 00:13:13.090 "name": "BaseBdev1", 00:13:13.090 "aliases": [ 00:13:13.090 "2ab59404-ba8e-4618-b9bc-6f8e9113aac2" 00:13:13.090 ], 00:13:13.090 "product_name": "Malloc disk", 00:13:13.090 "block_size": 512, 00:13:13.090 "num_blocks": 65536, 00:13:13.090 "uuid": "2ab59404-ba8e-4618-b9bc-6f8e9113aac2", 00:13:13.090 "assigned_rate_limits": { 00:13:13.090 "rw_ios_per_sec": 0, 00:13:13.090 "rw_mbytes_per_sec": 0, 00:13:13.090 "r_mbytes_per_sec": 0, 00:13:13.090 "w_mbytes_per_sec": 0 00:13:13.090 }, 00:13:13.090 "claimed": true, 00:13:13.090 "claim_type": "exclusive_write", 00:13:13.090 "zoned": false, 00:13:13.090 "supported_io_types": { 00:13:13.090 "read": true, 00:13:13.090 "write": true, 00:13:13.090 "unmap": true, 00:13:13.090 "flush": true, 00:13:13.090 "reset": true, 00:13:13.090 "nvme_admin": false, 00:13:13.090 "nvme_io": false, 00:13:13.090 "nvme_io_md": false, 00:13:13.090 "write_zeroes": true, 00:13:13.090 "zcopy": true, 00:13:13.090 "get_zone_info": false, 00:13:13.090 "zone_management": false, 00:13:13.090 "zone_append": false, 00:13:13.090 "compare": false, 00:13:13.090 "compare_and_write": false, 00:13:13.090 "abort": true, 00:13:13.090 "seek_hole": false, 00:13:13.090 "seek_data": false, 00:13:13.090 "copy": true, 00:13:13.090 "nvme_iov_md": false 00:13:13.090 }, 00:13:13.090 "memory_domains": [ 00:13:13.090 { 00:13:13.090 "dma_device_id": "system", 00:13:13.090 "dma_device_type": 1 00:13:13.090 }, 00:13:13.090 { 00:13:13.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.090 "dma_device_type": 2 00:13:13.090 } 00:13:13.090 ], 00:13:13.090 "driver_specific": {} 00:13:13.090 } 00:13:13.090 ] 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.090 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.091 14:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.091 14:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.091 14:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.091 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.091 "name": "Existed_Raid", 00:13:13.091 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:13.091 "strip_size_kb": 0, 00:13:13.091 "state": "configuring", 00:13:13.091 "raid_level": "raid1", 00:13:13.091 "superblock": false, 00:13:13.091 "num_base_bdevs": 4, 00:13:13.091 "num_base_bdevs_discovered": 1, 00:13:13.091 "num_base_bdevs_operational": 4, 00:13:13.091 "base_bdevs_list": [ 00:13:13.091 { 00:13:13.091 "name": "BaseBdev1", 00:13:13.091 "uuid": "2ab59404-ba8e-4618-b9bc-6f8e9113aac2", 00:13:13.091 "is_configured": true, 00:13:13.091 "data_offset": 0, 00:13:13.091 "data_size": 65536 00:13:13.091 }, 00:13:13.091 { 00:13:13.091 "name": "BaseBdev2", 00:13:13.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.091 "is_configured": false, 00:13:13.091 "data_offset": 0, 00:13:13.091 "data_size": 0 00:13:13.091 }, 00:13:13.091 { 00:13:13.091 "name": "BaseBdev3", 00:13:13.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.091 "is_configured": false, 00:13:13.091 "data_offset": 0, 00:13:13.091 "data_size": 0 00:13:13.091 }, 00:13:13.091 { 00:13:13.091 "name": "BaseBdev4", 00:13:13.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.091 "is_configured": false, 00:13:13.091 "data_offset": 0, 00:13:13.091 "data_size": 0 00:13:13.091 } 00:13:13.091 ] 00:13:13.091 }' 00:13:13.091 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.091 14:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.699 [2024-11-20 14:23:52.508936] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:13.699 [2024-11-20 14:23:52.509163] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.699 [2024-11-20 14:23:52.516981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.699 [2024-11-20 14:23:52.519490] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.699 [2024-11-20 14:23:52.519577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.699 [2024-11-20 14:23:52.519595] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:13.699 [2024-11-20 14:23:52.519612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:13.699 [2024-11-20 14:23:52.519622] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:13.699 [2024-11-20 14:23:52.519635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:13.699 14:23:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.699 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.699 "name": "Existed_Raid", 00:13:13.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.699 "strip_size_kb": 0, 00:13:13.699 "state": "configuring", 00:13:13.699 "raid_level": "raid1", 00:13:13.699 "superblock": false, 00:13:13.699 "num_base_bdevs": 4, 00:13:13.699 "num_base_bdevs_discovered": 1, 00:13:13.699 
"num_base_bdevs_operational": 4, 00:13:13.699 "base_bdevs_list": [ 00:13:13.699 { 00:13:13.699 "name": "BaseBdev1", 00:13:13.699 "uuid": "2ab59404-ba8e-4618-b9bc-6f8e9113aac2", 00:13:13.700 "is_configured": true, 00:13:13.700 "data_offset": 0, 00:13:13.700 "data_size": 65536 00:13:13.700 }, 00:13:13.700 { 00:13:13.700 "name": "BaseBdev2", 00:13:13.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.700 "is_configured": false, 00:13:13.700 "data_offset": 0, 00:13:13.700 "data_size": 0 00:13:13.700 }, 00:13:13.700 { 00:13:13.700 "name": "BaseBdev3", 00:13:13.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.700 "is_configured": false, 00:13:13.700 "data_offset": 0, 00:13:13.700 "data_size": 0 00:13:13.700 }, 00:13:13.700 { 00:13:13.700 "name": "BaseBdev4", 00:13:13.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.700 "is_configured": false, 00:13:13.700 "data_offset": 0, 00:13:13.700 "data_size": 0 00:13:13.700 } 00:13:13.700 ] 00:13:13.700 }' 00:13:13.700 14:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.700 14:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.269 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:14.269 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.269 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.269 [2024-11-20 14:23:53.088766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:14.269 BaseBdev2 00:13:14.269 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.269 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:14.269 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:13:14.269 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.269 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:14.269 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.269 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.269 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.269 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.269 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.269 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.269 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:14.269 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.269 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.269 [ 00:13:14.269 { 00:13:14.269 "name": "BaseBdev2", 00:13:14.269 "aliases": [ 00:13:14.269 "06a6532d-e28a-43f6-8d99-c328e01d7025" 00:13:14.269 ], 00:13:14.269 "product_name": "Malloc disk", 00:13:14.269 "block_size": 512, 00:13:14.269 "num_blocks": 65536, 00:13:14.269 "uuid": "06a6532d-e28a-43f6-8d99-c328e01d7025", 00:13:14.269 "assigned_rate_limits": { 00:13:14.269 "rw_ios_per_sec": 0, 00:13:14.269 "rw_mbytes_per_sec": 0, 00:13:14.269 "r_mbytes_per_sec": 0, 00:13:14.269 "w_mbytes_per_sec": 0 00:13:14.269 }, 00:13:14.269 "claimed": true, 00:13:14.269 "claim_type": "exclusive_write", 00:13:14.269 "zoned": false, 00:13:14.269 "supported_io_types": { 00:13:14.269 "read": true, 00:13:14.269 "write": true, 00:13:14.269 
"unmap": true, 00:13:14.269 "flush": true, 00:13:14.269 "reset": true, 00:13:14.269 "nvme_admin": false, 00:13:14.269 "nvme_io": false, 00:13:14.269 "nvme_io_md": false, 00:13:14.269 "write_zeroes": true, 00:13:14.269 "zcopy": true, 00:13:14.269 "get_zone_info": false, 00:13:14.269 "zone_management": false, 00:13:14.269 "zone_append": false, 00:13:14.269 "compare": false, 00:13:14.269 "compare_and_write": false, 00:13:14.269 "abort": true, 00:13:14.269 "seek_hole": false, 00:13:14.269 "seek_data": false, 00:13:14.269 "copy": true, 00:13:14.269 "nvme_iov_md": false 00:13:14.269 }, 00:13:14.269 "memory_domains": [ 00:13:14.269 { 00:13:14.269 "dma_device_id": "system", 00:13:14.269 "dma_device_type": 1 00:13:14.269 }, 00:13:14.269 { 00:13:14.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.269 "dma_device_type": 2 00:13:14.269 } 00:13:14.269 ], 00:13:14.270 "driver_specific": {} 00:13:14.270 } 00:13:14.270 ] 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.270 14:23:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.270 "name": "Existed_Raid", 00:13:14.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.270 "strip_size_kb": 0, 00:13:14.270 "state": "configuring", 00:13:14.270 "raid_level": "raid1", 00:13:14.270 "superblock": false, 00:13:14.270 "num_base_bdevs": 4, 00:13:14.270 "num_base_bdevs_discovered": 2, 00:13:14.270 "num_base_bdevs_operational": 4, 00:13:14.270 "base_bdevs_list": [ 00:13:14.270 { 00:13:14.270 "name": "BaseBdev1", 00:13:14.270 "uuid": "2ab59404-ba8e-4618-b9bc-6f8e9113aac2", 00:13:14.270 "is_configured": true, 00:13:14.270 "data_offset": 0, 00:13:14.270 "data_size": 65536 00:13:14.270 }, 00:13:14.270 { 00:13:14.270 "name": "BaseBdev2", 00:13:14.270 "uuid": "06a6532d-e28a-43f6-8d99-c328e01d7025", 00:13:14.270 "is_configured": true, 00:13:14.270 
"data_offset": 0, 00:13:14.270 "data_size": 65536 00:13:14.270 }, 00:13:14.270 { 00:13:14.270 "name": "BaseBdev3", 00:13:14.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.270 "is_configured": false, 00:13:14.270 "data_offset": 0, 00:13:14.270 "data_size": 0 00:13:14.270 }, 00:13:14.270 { 00:13:14.270 "name": "BaseBdev4", 00:13:14.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.270 "is_configured": false, 00:13:14.270 "data_offset": 0, 00:13:14.270 "data_size": 0 00:13:14.270 } 00:13:14.270 ] 00:13:14.270 }' 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.270 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.838 [2024-11-20 14:23:53.693481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.838 BaseBdev3 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.838 [ 00:13:14.838 { 00:13:14.838 "name": "BaseBdev3", 00:13:14.838 "aliases": [ 00:13:14.838 "48a82b70-0b2c-4154-a083-add49f2f9a2a" 00:13:14.838 ], 00:13:14.838 "product_name": "Malloc disk", 00:13:14.838 "block_size": 512, 00:13:14.838 "num_blocks": 65536, 00:13:14.838 "uuid": "48a82b70-0b2c-4154-a083-add49f2f9a2a", 00:13:14.838 "assigned_rate_limits": { 00:13:14.838 "rw_ios_per_sec": 0, 00:13:14.838 "rw_mbytes_per_sec": 0, 00:13:14.838 "r_mbytes_per_sec": 0, 00:13:14.838 "w_mbytes_per_sec": 0 00:13:14.838 }, 00:13:14.838 "claimed": true, 00:13:14.838 "claim_type": "exclusive_write", 00:13:14.838 "zoned": false, 00:13:14.838 "supported_io_types": { 00:13:14.838 "read": true, 00:13:14.838 "write": true, 00:13:14.838 "unmap": true, 00:13:14.838 "flush": true, 00:13:14.838 "reset": true, 00:13:14.838 "nvme_admin": false, 00:13:14.838 "nvme_io": false, 00:13:14.838 "nvme_io_md": false, 00:13:14.838 "write_zeroes": true, 00:13:14.838 "zcopy": true, 00:13:14.838 "get_zone_info": false, 00:13:14.838 "zone_management": false, 00:13:14.838 "zone_append": false, 00:13:14.838 "compare": false, 00:13:14.838 "compare_and_write": false, 00:13:14.838 "abort": true, 
00:13:14.838 "seek_hole": false, 00:13:14.838 "seek_data": false, 00:13:14.838 "copy": true, 00:13:14.838 "nvme_iov_md": false 00:13:14.838 }, 00:13:14.838 "memory_domains": [ 00:13:14.838 { 00:13:14.838 "dma_device_id": "system", 00:13:14.838 "dma_device_type": 1 00:13:14.838 }, 00:13:14.838 { 00:13:14.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.838 "dma_device_type": 2 00:13:14.838 } 00:13:14.838 ], 00:13:14.838 "driver_specific": {} 00:13:14.838 } 00:13:14.838 ] 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.838 14:23:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.838 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.838 "name": "Existed_Raid", 00:13:14.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.838 "strip_size_kb": 0, 00:13:14.838 "state": "configuring", 00:13:14.838 "raid_level": "raid1", 00:13:14.838 "superblock": false, 00:13:14.838 "num_base_bdevs": 4, 00:13:14.838 "num_base_bdevs_discovered": 3, 00:13:14.838 "num_base_bdevs_operational": 4, 00:13:14.838 "base_bdevs_list": [ 00:13:14.838 { 00:13:14.838 "name": "BaseBdev1", 00:13:14.838 "uuid": "2ab59404-ba8e-4618-b9bc-6f8e9113aac2", 00:13:14.839 "is_configured": true, 00:13:14.839 "data_offset": 0, 00:13:14.839 "data_size": 65536 00:13:14.839 }, 00:13:14.839 { 00:13:14.839 "name": "BaseBdev2", 00:13:14.839 "uuid": "06a6532d-e28a-43f6-8d99-c328e01d7025", 00:13:14.839 "is_configured": true, 00:13:14.839 "data_offset": 0, 00:13:14.839 "data_size": 65536 00:13:14.839 }, 00:13:14.839 { 00:13:14.839 "name": "BaseBdev3", 00:13:14.839 "uuid": "48a82b70-0b2c-4154-a083-add49f2f9a2a", 00:13:14.839 "is_configured": true, 00:13:14.839 "data_offset": 0, 00:13:14.839 "data_size": 65536 00:13:14.839 }, 00:13:14.839 { 00:13:14.839 "name": "BaseBdev4", 00:13:14.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.839 "is_configured": false, 00:13:14.839 "data_offset": 
0, 00:13:14.839 "data_size": 0 00:13:14.839 } 00:13:14.839 ] 00:13:14.839 }' 00:13:14.839 14:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.839 14:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.406 [2024-11-20 14:23:54.277783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:15.406 [2024-11-20 14:23:54.277884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:15.406 [2024-11-20 14:23:54.277905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:15.406 [2024-11-20 14:23:54.278305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:15.406 [2024-11-20 14:23:54.278542] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:15.406 [2024-11-20 14:23:54.278564] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:15.406 [2024-11-20 14:23:54.278890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.406 BaseBdev4 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.406 [ 00:13:15.406 { 00:13:15.406 "name": "BaseBdev4", 00:13:15.406 "aliases": [ 00:13:15.406 "9d267dc1-0fbf-45ad-a87c-08247e60ff6e" 00:13:15.406 ], 00:13:15.406 "product_name": "Malloc disk", 00:13:15.406 "block_size": 512, 00:13:15.406 "num_blocks": 65536, 00:13:15.406 "uuid": "9d267dc1-0fbf-45ad-a87c-08247e60ff6e", 00:13:15.406 "assigned_rate_limits": { 00:13:15.406 "rw_ios_per_sec": 0, 00:13:15.406 "rw_mbytes_per_sec": 0, 00:13:15.406 "r_mbytes_per_sec": 0, 00:13:15.406 "w_mbytes_per_sec": 0 00:13:15.406 }, 00:13:15.406 "claimed": true, 00:13:15.406 "claim_type": "exclusive_write", 00:13:15.406 "zoned": false, 00:13:15.406 "supported_io_types": { 00:13:15.406 "read": true, 00:13:15.406 "write": true, 00:13:15.406 "unmap": true, 00:13:15.406 "flush": true, 00:13:15.406 "reset": true, 00:13:15.406 "nvme_admin": false, 00:13:15.406 "nvme_io": 
false, 00:13:15.406 "nvme_io_md": false, 00:13:15.406 "write_zeroes": true, 00:13:15.406 "zcopy": true, 00:13:15.406 "get_zone_info": false, 00:13:15.406 "zone_management": false, 00:13:15.406 "zone_append": false, 00:13:15.406 "compare": false, 00:13:15.406 "compare_and_write": false, 00:13:15.406 "abort": true, 00:13:15.406 "seek_hole": false, 00:13:15.406 "seek_data": false, 00:13:15.406 "copy": true, 00:13:15.406 "nvme_iov_md": false 00:13:15.406 }, 00:13:15.406 "memory_domains": [ 00:13:15.406 { 00:13:15.406 "dma_device_id": "system", 00:13:15.406 "dma_device_type": 1 00:13:15.406 }, 00:13:15.406 { 00:13:15.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.406 "dma_device_type": 2 00:13:15.406 } 00:13:15.406 ], 00:13:15.406 "driver_specific": {} 00:13:15.406 } 00:13:15.406 ] 00:13:15.406 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.407 "name": "Existed_Raid", 00:13:15.407 "uuid": "ef40bb14-05d2-48ac-b1e9-e4a95d95806d", 00:13:15.407 "strip_size_kb": 0, 00:13:15.407 "state": "online", 00:13:15.407 "raid_level": "raid1", 00:13:15.407 "superblock": false, 00:13:15.407 "num_base_bdevs": 4, 00:13:15.407 "num_base_bdevs_discovered": 4, 00:13:15.407 "num_base_bdevs_operational": 4, 00:13:15.407 "base_bdevs_list": [ 00:13:15.407 { 00:13:15.407 "name": "BaseBdev1", 00:13:15.407 "uuid": "2ab59404-ba8e-4618-b9bc-6f8e9113aac2", 00:13:15.407 "is_configured": true, 00:13:15.407 "data_offset": 0, 00:13:15.407 "data_size": 65536 00:13:15.407 }, 00:13:15.407 { 00:13:15.407 "name": "BaseBdev2", 00:13:15.407 "uuid": "06a6532d-e28a-43f6-8d99-c328e01d7025", 00:13:15.407 "is_configured": true, 00:13:15.407 "data_offset": 0, 00:13:15.407 "data_size": 65536 00:13:15.407 }, 00:13:15.407 { 00:13:15.407 "name": "BaseBdev3", 00:13:15.407 "uuid": "48a82b70-0b2c-4154-a083-add49f2f9a2a", 
00:13:15.407 "is_configured": true, 00:13:15.407 "data_offset": 0, 00:13:15.407 "data_size": 65536 00:13:15.407 }, 00:13:15.407 { 00:13:15.407 "name": "BaseBdev4", 00:13:15.407 "uuid": "9d267dc1-0fbf-45ad-a87c-08247e60ff6e", 00:13:15.407 "is_configured": true, 00:13:15.407 "data_offset": 0, 00:13:15.407 "data_size": 65536 00:13:15.407 } 00:13:15.407 ] 00:13:15.407 }' 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.407 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.973 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:15.973 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:15.973 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:15.973 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:15.973 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:15.973 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:15.973 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:15.973 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:15.973 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.973 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.973 [2024-11-20 14:23:54.846446] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:15.973 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.973 14:23:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:15.973 "name": "Existed_Raid", 00:13:15.973 "aliases": [ 00:13:15.973 "ef40bb14-05d2-48ac-b1e9-e4a95d95806d" 00:13:15.973 ], 00:13:15.973 "product_name": "Raid Volume", 00:13:15.973 "block_size": 512, 00:13:15.973 "num_blocks": 65536, 00:13:15.973 "uuid": "ef40bb14-05d2-48ac-b1e9-e4a95d95806d", 00:13:15.973 "assigned_rate_limits": { 00:13:15.973 "rw_ios_per_sec": 0, 00:13:15.973 "rw_mbytes_per_sec": 0, 00:13:15.973 "r_mbytes_per_sec": 0, 00:13:15.974 "w_mbytes_per_sec": 0 00:13:15.974 }, 00:13:15.974 "claimed": false, 00:13:15.974 "zoned": false, 00:13:15.974 "supported_io_types": { 00:13:15.974 "read": true, 00:13:15.974 "write": true, 00:13:15.974 "unmap": false, 00:13:15.974 "flush": false, 00:13:15.974 "reset": true, 00:13:15.974 "nvme_admin": false, 00:13:15.974 "nvme_io": false, 00:13:15.974 "nvme_io_md": false, 00:13:15.974 "write_zeroes": true, 00:13:15.974 "zcopy": false, 00:13:15.974 "get_zone_info": false, 00:13:15.974 "zone_management": false, 00:13:15.974 "zone_append": false, 00:13:15.974 "compare": false, 00:13:15.974 "compare_and_write": false, 00:13:15.974 "abort": false, 00:13:15.974 "seek_hole": false, 00:13:15.974 "seek_data": false, 00:13:15.974 "copy": false, 00:13:15.974 "nvme_iov_md": false 00:13:15.974 }, 00:13:15.974 "memory_domains": [ 00:13:15.974 { 00:13:15.974 "dma_device_id": "system", 00:13:15.974 "dma_device_type": 1 00:13:15.974 }, 00:13:15.974 { 00:13:15.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.974 "dma_device_type": 2 00:13:15.974 }, 00:13:15.974 { 00:13:15.974 "dma_device_id": "system", 00:13:15.974 "dma_device_type": 1 00:13:15.974 }, 00:13:15.974 { 00:13:15.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.974 "dma_device_type": 2 00:13:15.974 }, 00:13:15.974 { 00:13:15.974 "dma_device_id": "system", 00:13:15.974 "dma_device_type": 1 00:13:15.974 }, 00:13:15.974 { 00:13:15.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.974 "dma_device_type": 2 
00:13:15.974 }, 00:13:15.974 { 00:13:15.974 "dma_device_id": "system", 00:13:15.974 "dma_device_type": 1 00:13:15.974 }, 00:13:15.974 { 00:13:15.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.974 "dma_device_type": 2 00:13:15.974 } 00:13:15.974 ], 00:13:15.974 "driver_specific": { 00:13:15.974 "raid": { 00:13:15.974 "uuid": "ef40bb14-05d2-48ac-b1e9-e4a95d95806d", 00:13:15.974 "strip_size_kb": 0, 00:13:15.974 "state": "online", 00:13:15.974 "raid_level": "raid1", 00:13:15.974 "superblock": false, 00:13:15.974 "num_base_bdevs": 4, 00:13:15.974 "num_base_bdevs_discovered": 4, 00:13:15.974 "num_base_bdevs_operational": 4, 00:13:15.974 "base_bdevs_list": [ 00:13:15.974 { 00:13:15.974 "name": "BaseBdev1", 00:13:15.974 "uuid": "2ab59404-ba8e-4618-b9bc-6f8e9113aac2", 00:13:15.974 "is_configured": true, 00:13:15.974 "data_offset": 0, 00:13:15.974 "data_size": 65536 00:13:15.974 }, 00:13:15.974 { 00:13:15.974 "name": "BaseBdev2", 00:13:15.974 "uuid": "06a6532d-e28a-43f6-8d99-c328e01d7025", 00:13:15.974 "is_configured": true, 00:13:15.974 "data_offset": 0, 00:13:15.974 "data_size": 65536 00:13:15.974 }, 00:13:15.974 { 00:13:15.974 "name": "BaseBdev3", 00:13:15.974 "uuid": "48a82b70-0b2c-4154-a083-add49f2f9a2a", 00:13:15.974 "is_configured": true, 00:13:15.974 "data_offset": 0, 00:13:15.974 "data_size": 65536 00:13:15.974 }, 00:13:15.974 { 00:13:15.974 "name": "BaseBdev4", 00:13:15.974 "uuid": "9d267dc1-0fbf-45ad-a87c-08247e60ff6e", 00:13:15.974 "is_configured": true, 00:13:15.974 "data_offset": 0, 00:13:15.974 "data_size": 65536 00:13:15.974 } 00:13:15.974 ] 00:13:15.974 } 00:13:15.974 } 00:13:15.974 }' 00:13:15.974 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:15.974 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:15.974 BaseBdev2 00:13:15.974 BaseBdev3 00:13:15.974 BaseBdev4' 00:13:15.974 
14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.233 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:16.233 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.233 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:16.233 14:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.233 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.233 14:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.233 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.492 [2024-11-20 14:23:55.218219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.492 "name": "Existed_Raid", 00:13:16.492 "uuid": "ef40bb14-05d2-48ac-b1e9-e4a95d95806d", 00:13:16.492 "strip_size_kb": 0, 00:13:16.492 "state": "online", 00:13:16.492 "raid_level": "raid1", 00:13:16.492 "superblock": false, 00:13:16.492 "num_base_bdevs": 4, 00:13:16.492 "num_base_bdevs_discovered": 3, 00:13:16.492 "num_base_bdevs_operational": 3, 00:13:16.492 "base_bdevs_list": [ 00:13:16.492 { 00:13:16.492 "name": null, 00:13:16.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.492 "is_configured": false, 00:13:16.492 "data_offset": 0, 00:13:16.492 "data_size": 65536 00:13:16.492 }, 00:13:16.492 { 00:13:16.492 "name": "BaseBdev2", 00:13:16.492 "uuid": "06a6532d-e28a-43f6-8d99-c328e01d7025", 00:13:16.492 "is_configured": true, 00:13:16.492 "data_offset": 0, 00:13:16.492 "data_size": 65536 00:13:16.492 }, 00:13:16.492 { 00:13:16.492 "name": "BaseBdev3", 00:13:16.492 "uuid": "48a82b70-0b2c-4154-a083-add49f2f9a2a", 00:13:16.492 "is_configured": true, 00:13:16.492 "data_offset": 0, 00:13:16.492 "data_size": 65536 00:13:16.492 }, 00:13:16.492 { 
00:13:16.492 "name": "BaseBdev4", 00:13:16.492 "uuid": "9d267dc1-0fbf-45ad-a87c-08247e60ff6e", 00:13:16.492 "is_configured": true, 00:13:16.492 "data_offset": 0, 00:13:16.492 "data_size": 65536 00:13:16.492 } 00:13:16.492 ] 00:13:16.492 }' 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.492 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.060 [2024-11-20 14:23:55.899173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.060 
14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.060 14:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:17.060 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.320 [2024-11-20 14:23:56.047936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.320 14:23:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.320 [2024-11-20 14:23:56.196928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:17.320 [2024-11-20 14:23:56.197064] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.320 [2024-11-20 14:23:56.286695] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.320 [2024-11-20 14:23:56.286982] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:17.320 [2024-11-20 14:23:56.287044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.320 14:23:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.320 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.580 BaseBdev2 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:17.580 14:23:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.580 [ 00:13:17.580 { 00:13:17.580 "name": "BaseBdev2", 00:13:17.580 "aliases": [ 00:13:17.580 "047ddfcd-6397-4ced-9dc4-5ec453cc8079" 00:13:17.580 ], 00:13:17.580 "product_name": "Malloc disk", 00:13:17.580 "block_size": 512, 00:13:17.580 "num_blocks": 65536, 00:13:17.580 "uuid": "047ddfcd-6397-4ced-9dc4-5ec453cc8079", 00:13:17.580 "assigned_rate_limits": { 00:13:17.580 "rw_ios_per_sec": 0, 00:13:17.580 "rw_mbytes_per_sec": 0, 00:13:17.580 "r_mbytes_per_sec": 0, 00:13:17.580 "w_mbytes_per_sec": 0 00:13:17.580 }, 00:13:17.580 "claimed": false, 00:13:17.580 "zoned": false, 00:13:17.580 "supported_io_types": { 00:13:17.580 "read": true, 00:13:17.580 "write": true, 00:13:17.580 "unmap": true, 00:13:17.580 "flush": true, 00:13:17.580 "reset": true, 00:13:17.580 "nvme_admin": false, 00:13:17.580 "nvme_io": false, 00:13:17.580 "nvme_io_md": false, 00:13:17.580 "write_zeroes": true, 00:13:17.580 "zcopy": true, 00:13:17.580 "get_zone_info": false, 00:13:17.580 "zone_management": false, 00:13:17.580 "zone_append": false, 00:13:17.580 "compare": false, 00:13:17.580 "compare_and_write": false, 
00:13:17.580 "abort": true, 00:13:17.580 "seek_hole": false, 00:13:17.580 "seek_data": false, 00:13:17.580 "copy": true, 00:13:17.580 "nvme_iov_md": false 00:13:17.580 }, 00:13:17.580 "memory_domains": [ 00:13:17.580 { 00:13:17.580 "dma_device_id": "system", 00:13:17.580 "dma_device_type": 1 00:13:17.580 }, 00:13:17.580 { 00:13:17.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.580 "dma_device_type": 2 00:13:17.580 } 00:13:17.580 ], 00:13:17.580 "driver_specific": {} 00:13:17.580 } 00:13:17.580 ] 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.580 BaseBdev3 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:17.580 14:23:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.580 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.581 [ 00:13:17.581 { 00:13:17.581 "name": "BaseBdev3", 00:13:17.581 "aliases": [ 00:13:17.581 "efd7231f-ffc8-408f-a6b2-360fab3f658a" 00:13:17.581 ], 00:13:17.581 "product_name": "Malloc disk", 00:13:17.581 "block_size": 512, 00:13:17.581 "num_blocks": 65536, 00:13:17.581 "uuid": "efd7231f-ffc8-408f-a6b2-360fab3f658a", 00:13:17.581 "assigned_rate_limits": { 00:13:17.581 "rw_ios_per_sec": 0, 00:13:17.581 "rw_mbytes_per_sec": 0, 00:13:17.581 "r_mbytes_per_sec": 0, 00:13:17.581 "w_mbytes_per_sec": 0 00:13:17.581 }, 00:13:17.581 "claimed": false, 00:13:17.581 "zoned": false, 00:13:17.581 "supported_io_types": { 00:13:17.581 "read": true, 00:13:17.581 "write": true, 00:13:17.581 "unmap": true, 00:13:17.581 "flush": true, 00:13:17.581 "reset": true, 00:13:17.581 "nvme_admin": false, 00:13:17.581 "nvme_io": false, 00:13:17.581 "nvme_io_md": false, 00:13:17.581 "write_zeroes": true, 00:13:17.581 "zcopy": true, 00:13:17.581 "get_zone_info": false, 00:13:17.581 "zone_management": false, 00:13:17.581 "zone_append": false, 00:13:17.581 "compare": false, 00:13:17.581 "compare_and_write": false, 
00:13:17.581 "abort": true, 00:13:17.581 "seek_hole": false, 00:13:17.581 "seek_data": false, 00:13:17.581 "copy": true, 00:13:17.581 "nvme_iov_md": false 00:13:17.581 }, 00:13:17.581 "memory_domains": [ 00:13:17.581 { 00:13:17.581 "dma_device_id": "system", 00:13:17.581 "dma_device_type": 1 00:13:17.581 }, 00:13:17.581 { 00:13:17.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.581 "dma_device_type": 2 00:13:17.581 } 00:13:17.581 ], 00:13:17.581 "driver_specific": {} 00:13:17.581 } 00:13:17.581 ] 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.581 BaseBdev4 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:17.581 14:23:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.581 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.841 [ 00:13:17.841 { 00:13:17.841 "name": "BaseBdev4", 00:13:17.841 "aliases": [ 00:13:17.841 "25bd3a6f-29c0-4350-9b6c-d1e2aed11388" 00:13:17.841 ], 00:13:17.841 "product_name": "Malloc disk", 00:13:17.841 "block_size": 512, 00:13:17.841 "num_blocks": 65536, 00:13:17.841 "uuid": "25bd3a6f-29c0-4350-9b6c-d1e2aed11388", 00:13:17.841 "assigned_rate_limits": { 00:13:17.841 "rw_ios_per_sec": 0, 00:13:17.841 "rw_mbytes_per_sec": 0, 00:13:17.841 "r_mbytes_per_sec": 0, 00:13:17.841 "w_mbytes_per_sec": 0 00:13:17.841 }, 00:13:17.841 "claimed": false, 00:13:17.841 "zoned": false, 00:13:17.841 "supported_io_types": { 00:13:17.841 "read": true, 00:13:17.841 "write": true, 00:13:17.841 "unmap": true, 00:13:17.841 "flush": true, 00:13:17.841 "reset": true, 00:13:17.841 "nvme_admin": false, 00:13:17.841 "nvme_io": false, 00:13:17.841 "nvme_io_md": false, 00:13:17.841 "write_zeroes": true, 00:13:17.841 "zcopy": true, 00:13:17.841 "get_zone_info": false, 00:13:17.841 "zone_management": false, 00:13:17.841 "zone_append": false, 00:13:17.841 "compare": false, 00:13:17.841 "compare_and_write": false, 
00:13:17.841 "abort": true, 00:13:17.841 "seek_hole": false, 00:13:17.841 "seek_data": false, 00:13:17.841 "copy": true, 00:13:17.841 "nvme_iov_md": false 00:13:17.841 }, 00:13:17.841 "memory_domains": [ 00:13:17.841 { 00:13:17.841 "dma_device_id": "system", 00:13:17.841 "dma_device_type": 1 00:13:17.841 }, 00:13:17.841 { 00:13:17.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.841 "dma_device_type": 2 00:13:17.841 } 00:13:17.841 ], 00:13:17.841 "driver_specific": {} 00:13:17.841 } 00:13:17.841 ] 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.841 [2024-11-20 14:23:56.584925] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:17.841 [2024-11-20 14:23:56.585003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:17.841 [2024-11-20 14:23:56.585036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.841 [2024-11-20 14:23:56.587507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:17.841 [2024-11-20 14:23:56.587575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:17.841 14:23:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.841 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.841 "name": "Existed_Raid", 00:13:17.842 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:17.842 "strip_size_kb": 0, 00:13:17.842 "state": "configuring", 00:13:17.842 "raid_level": "raid1", 00:13:17.842 "superblock": false, 00:13:17.842 "num_base_bdevs": 4, 00:13:17.842 "num_base_bdevs_discovered": 3, 00:13:17.842 "num_base_bdevs_operational": 4, 00:13:17.842 "base_bdevs_list": [ 00:13:17.842 { 00:13:17.842 "name": "BaseBdev1", 00:13:17.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.842 "is_configured": false, 00:13:17.842 "data_offset": 0, 00:13:17.842 "data_size": 0 00:13:17.842 }, 00:13:17.842 { 00:13:17.842 "name": "BaseBdev2", 00:13:17.842 "uuid": "047ddfcd-6397-4ced-9dc4-5ec453cc8079", 00:13:17.842 "is_configured": true, 00:13:17.842 "data_offset": 0, 00:13:17.842 "data_size": 65536 00:13:17.842 }, 00:13:17.842 { 00:13:17.842 "name": "BaseBdev3", 00:13:17.842 "uuid": "efd7231f-ffc8-408f-a6b2-360fab3f658a", 00:13:17.842 "is_configured": true, 00:13:17.842 "data_offset": 0, 00:13:17.842 "data_size": 65536 00:13:17.842 }, 00:13:17.842 { 00:13:17.842 "name": "BaseBdev4", 00:13:17.842 "uuid": "25bd3a6f-29c0-4350-9b6c-d1e2aed11388", 00:13:17.842 "is_configured": true, 00:13:17.842 "data_offset": 0, 00:13:17.842 "data_size": 65536 00:13:17.842 } 00:13:17.842 ] 00:13:17.842 }' 00:13:17.842 14:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.842 14:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.445 [2024-11-20 14:23:57.113083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.445 "name": "Existed_Raid", 00:13:18.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.445 
"strip_size_kb": 0, 00:13:18.445 "state": "configuring", 00:13:18.445 "raid_level": "raid1", 00:13:18.445 "superblock": false, 00:13:18.445 "num_base_bdevs": 4, 00:13:18.445 "num_base_bdevs_discovered": 2, 00:13:18.445 "num_base_bdevs_operational": 4, 00:13:18.445 "base_bdevs_list": [ 00:13:18.445 { 00:13:18.445 "name": "BaseBdev1", 00:13:18.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.445 "is_configured": false, 00:13:18.445 "data_offset": 0, 00:13:18.445 "data_size": 0 00:13:18.445 }, 00:13:18.445 { 00:13:18.445 "name": null, 00:13:18.445 "uuid": "047ddfcd-6397-4ced-9dc4-5ec453cc8079", 00:13:18.445 "is_configured": false, 00:13:18.445 "data_offset": 0, 00:13:18.445 "data_size": 65536 00:13:18.445 }, 00:13:18.445 { 00:13:18.445 "name": "BaseBdev3", 00:13:18.445 "uuid": "efd7231f-ffc8-408f-a6b2-360fab3f658a", 00:13:18.445 "is_configured": true, 00:13:18.445 "data_offset": 0, 00:13:18.445 "data_size": 65536 00:13:18.445 }, 00:13:18.445 { 00:13:18.445 "name": "BaseBdev4", 00:13:18.445 "uuid": "25bd3a6f-29c0-4350-9b6c-d1e2aed11388", 00:13:18.445 "is_configured": true, 00:13:18.445 "data_offset": 0, 00:13:18.445 "data_size": 65536 00:13:18.445 } 00:13:18.445 ] 00:13:18.445 }' 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.445 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.705 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.705 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:18.705 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.705 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.705 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.705 14:23:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:18.705 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:18.705 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.705 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.964 [2024-11-20 14:23:57.707652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:18.964 BaseBdev1 00:13:18.964 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.964 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:18.964 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:18.964 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:18.964 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:18.964 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:18.964 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:18.964 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:18.964 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.964 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.964 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.964 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:18.964 14:23:57 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.964 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.964 [ 00:13:18.964 { 00:13:18.964 "name": "BaseBdev1", 00:13:18.964 "aliases": [ 00:13:18.964 "b6110a17-ac90-4cf9-8ca8-dd9c9ccdd21b" 00:13:18.964 ], 00:13:18.964 "product_name": "Malloc disk", 00:13:18.964 "block_size": 512, 00:13:18.964 "num_blocks": 65536, 00:13:18.964 "uuid": "b6110a17-ac90-4cf9-8ca8-dd9c9ccdd21b", 00:13:18.964 "assigned_rate_limits": { 00:13:18.964 "rw_ios_per_sec": 0, 00:13:18.964 "rw_mbytes_per_sec": 0, 00:13:18.965 "r_mbytes_per_sec": 0, 00:13:18.965 "w_mbytes_per_sec": 0 00:13:18.965 }, 00:13:18.965 "claimed": true, 00:13:18.965 "claim_type": "exclusive_write", 00:13:18.965 "zoned": false, 00:13:18.965 "supported_io_types": { 00:13:18.965 "read": true, 00:13:18.965 "write": true, 00:13:18.965 "unmap": true, 00:13:18.965 "flush": true, 00:13:18.965 "reset": true, 00:13:18.965 "nvme_admin": false, 00:13:18.965 "nvme_io": false, 00:13:18.965 "nvme_io_md": false, 00:13:18.965 "write_zeroes": true, 00:13:18.965 "zcopy": true, 00:13:18.965 "get_zone_info": false, 00:13:18.965 "zone_management": false, 00:13:18.965 "zone_append": false, 00:13:18.965 "compare": false, 00:13:18.965 "compare_and_write": false, 00:13:18.965 "abort": true, 00:13:18.965 "seek_hole": false, 00:13:18.965 "seek_data": false, 00:13:18.965 "copy": true, 00:13:18.965 "nvme_iov_md": false 00:13:18.965 }, 00:13:18.965 "memory_domains": [ 00:13:18.965 { 00:13:18.965 "dma_device_id": "system", 00:13:18.965 "dma_device_type": 1 00:13:18.965 }, 00:13:18.965 { 00:13:18.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.965 "dma_device_type": 2 00:13:18.965 } 00:13:18.965 ], 00:13:18.965 "driver_specific": {} 00:13:18.965 } 00:13:18.965 ] 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.965 "name": "Existed_Raid", 00:13:18.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.965 
"strip_size_kb": 0, 00:13:18.965 "state": "configuring", 00:13:18.965 "raid_level": "raid1", 00:13:18.965 "superblock": false, 00:13:18.965 "num_base_bdevs": 4, 00:13:18.965 "num_base_bdevs_discovered": 3, 00:13:18.965 "num_base_bdevs_operational": 4, 00:13:18.965 "base_bdevs_list": [ 00:13:18.965 { 00:13:18.965 "name": "BaseBdev1", 00:13:18.965 "uuid": "b6110a17-ac90-4cf9-8ca8-dd9c9ccdd21b", 00:13:18.965 "is_configured": true, 00:13:18.965 "data_offset": 0, 00:13:18.965 "data_size": 65536 00:13:18.965 }, 00:13:18.965 { 00:13:18.965 "name": null, 00:13:18.965 "uuid": "047ddfcd-6397-4ced-9dc4-5ec453cc8079", 00:13:18.965 "is_configured": false, 00:13:18.965 "data_offset": 0, 00:13:18.965 "data_size": 65536 00:13:18.965 }, 00:13:18.965 { 00:13:18.965 "name": "BaseBdev3", 00:13:18.965 "uuid": "efd7231f-ffc8-408f-a6b2-360fab3f658a", 00:13:18.965 "is_configured": true, 00:13:18.965 "data_offset": 0, 00:13:18.965 "data_size": 65536 00:13:18.965 }, 00:13:18.965 { 00:13:18.965 "name": "BaseBdev4", 00:13:18.965 "uuid": "25bd3a6f-29c0-4350-9b6c-d1e2aed11388", 00:13:18.965 "is_configured": true, 00:13:18.965 "data_offset": 0, 00:13:18.965 "data_size": 65536 00:13:18.965 } 00:13:18.965 ] 00:13:18.965 }' 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.965 14:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.534 
14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.534 [2024-11-20 14:23:58.331877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.534 "name": "Existed_Raid", 00:13:19.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.534 "strip_size_kb": 0, 00:13:19.534 "state": "configuring", 00:13:19.534 "raid_level": "raid1", 00:13:19.534 "superblock": false, 00:13:19.534 "num_base_bdevs": 4, 00:13:19.534 "num_base_bdevs_discovered": 2, 00:13:19.534 "num_base_bdevs_operational": 4, 00:13:19.534 "base_bdevs_list": [ 00:13:19.534 { 00:13:19.534 "name": "BaseBdev1", 00:13:19.534 "uuid": "b6110a17-ac90-4cf9-8ca8-dd9c9ccdd21b", 00:13:19.534 "is_configured": true, 00:13:19.534 "data_offset": 0, 00:13:19.534 "data_size": 65536 00:13:19.534 }, 00:13:19.534 { 00:13:19.534 "name": null, 00:13:19.534 "uuid": "047ddfcd-6397-4ced-9dc4-5ec453cc8079", 00:13:19.534 "is_configured": false, 00:13:19.534 "data_offset": 0, 00:13:19.534 "data_size": 65536 00:13:19.534 }, 00:13:19.534 { 00:13:19.534 "name": null, 00:13:19.534 "uuid": "efd7231f-ffc8-408f-a6b2-360fab3f658a", 00:13:19.534 "is_configured": false, 00:13:19.534 "data_offset": 0, 00:13:19.534 "data_size": 65536 00:13:19.534 }, 00:13:19.534 { 00:13:19.534 "name": "BaseBdev4", 00:13:19.534 "uuid": "25bd3a6f-29c0-4350-9b6c-d1e2aed11388", 00:13:19.534 "is_configured": true, 00:13:19.534 "data_offset": 0, 00:13:19.534 "data_size": 65536 00:13:19.534 } 00:13:19.534 ] 00:13:19.534 }' 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.534 14:23:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.103 [2024-11-20 14:23:58.940111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.103 "name": "Existed_Raid", 00:13:20.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.103 "strip_size_kb": 0, 00:13:20.103 "state": "configuring", 00:13:20.103 "raid_level": "raid1", 00:13:20.103 "superblock": false, 00:13:20.103 "num_base_bdevs": 4, 00:13:20.103 "num_base_bdevs_discovered": 3, 00:13:20.103 "num_base_bdevs_operational": 4, 00:13:20.103 "base_bdevs_list": [ 00:13:20.103 { 00:13:20.103 "name": "BaseBdev1", 00:13:20.103 "uuid": "b6110a17-ac90-4cf9-8ca8-dd9c9ccdd21b", 00:13:20.103 "is_configured": true, 00:13:20.103 "data_offset": 0, 00:13:20.103 "data_size": 65536 00:13:20.103 }, 00:13:20.103 { 00:13:20.103 "name": null, 00:13:20.103 "uuid": "047ddfcd-6397-4ced-9dc4-5ec453cc8079", 00:13:20.103 "is_configured": false, 00:13:20.103 "data_offset": 0, 00:13:20.103 "data_size": 65536 00:13:20.103 }, 00:13:20.103 { 
00:13:20.103 "name": "BaseBdev3", 00:13:20.103 "uuid": "efd7231f-ffc8-408f-a6b2-360fab3f658a", 00:13:20.103 "is_configured": true, 00:13:20.103 "data_offset": 0, 00:13:20.103 "data_size": 65536 00:13:20.103 }, 00:13:20.103 { 00:13:20.103 "name": "BaseBdev4", 00:13:20.103 "uuid": "25bd3a6f-29c0-4350-9b6c-d1e2aed11388", 00:13:20.103 "is_configured": true, 00:13:20.103 "data_offset": 0, 00:13:20.103 "data_size": 65536 00:13:20.103 } 00:13:20.103 ] 00:13:20.103 }' 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.103 14:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.669 [2024-11-20 14:23:59.528352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.669 14:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.927 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.927 "name": "Existed_Raid", 00:13:20.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.927 "strip_size_kb": 0, 00:13:20.927 "state": "configuring", 00:13:20.927 "raid_level": "raid1", 00:13:20.927 "superblock": false, 00:13:20.927 
"num_base_bdevs": 4, 00:13:20.927 "num_base_bdevs_discovered": 2, 00:13:20.927 "num_base_bdevs_operational": 4, 00:13:20.927 "base_bdevs_list": [ 00:13:20.927 { 00:13:20.927 "name": null, 00:13:20.927 "uuid": "b6110a17-ac90-4cf9-8ca8-dd9c9ccdd21b", 00:13:20.927 "is_configured": false, 00:13:20.927 "data_offset": 0, 00:13:20.927 "data_size": 65536 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "name": null, 00:13:20.927 "uuid": "047ddfcd-6397-4ced-9dc4-5ec453cc8079", 00:13:20.927 "is_configured": false, 00:13:20.927 "data_offset": 0, 00:13:20.927 "data_size": 65536 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "name": "BaseBdev3", 00:13:20.927 "uuid": "efd7231f-ffc8-408f-a6b2-360fab3f658a", 00:13:20.927 "is_configured": true, 00:13:20.927 "data_offset": 0, 00:13:20.927 "data_size": 65536 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "name": "BaseBdev4", 00:13:20.927 "uuid": "25bd3a6f-29c0-4350-9b6c-d1e2aed11388", 00:13:20.927 "is_configured": true, 00:13:20.927 "data_offset": 0, 00:13:20.927 "data_size": 65536 00:13:20.927 } 00:13:20.927 ] 00:13:20.927 }' 00:13:20.927 14:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.927 14:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.185 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.185 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.185 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.185 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:21.444 14:24:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.444 [2024-11-20 14:24:00.198958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.444 14:24:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.444 "name": "Existed_Raid", 00:13:21.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.444 "strip_size_kb": 0, 00:13:21.444 "state": "configuring", 00:13:21.444 "raid_level": "raid1", 00:13:21.444 "superblock": false, 00:13:21.444 "num_base_bdevs": 4, 00:13:21.444 "num_base_bdevs_discovered": 3, 00:13:21.444 "num_base_bdevs_operational": 4, 00:13:21.444 "base_bdevs_list": [ 00:13:21.444 { 00:13:21.444 "name": null, 00:13:21.444 "uuid": "b6110a17-ac90-4cf9-8ca8-dd9c9ccdd21b", 00:13:21.444 "is_configured": false, 00:13:21.444 "data_offset": 0, 00:13:21.444 "data_size": 65536 00:13:21.444 }, 00:13:21.444 { 00:13:21.444 "name": "BaseBdev2", 00:13:21.444 "uuid": "047ddfcd-6397-4ced-9dc4-5ec453cc8079", 00:13:21.444 "is_configured": true, 00:13:21.444 "data_offset": 0, 00:13:21.444 "data_size": 65536 00:13:21.444 }, 00:13:21.444 { 00:13:21.444 "name": "BaseBdev3", 00:13:21.444 "uuid": "efd7231f-ffc8-408f-a6b2-360fab3f658a", 00:13:21.444 "is_configured": true, 00:13:21.444 "data_offset": 0, 00:13:21.444 "data_size": 65536 00:13:21.444 }, 00:13:21.444 { 00:13:21.444 "name": "BaseBdev4", 00:13:21.444 "uuid": "25bd3a6f-29c0-4350-9b6c-d1e2aed11388", 00:13:21.444 "is_configured": true, 00:13:21.444 "data_offset": 0, 00:13:21.444 "data_size": 65536 00:13:21.444 } 00:13:21.444 ] 00:13:21.444 }' 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.444 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.012 14:24:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b6110a17-ac90-4cf9-8ca8-dd9c9ccdd21b 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.012 [2024-11-20 14:24:00.886748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:22.012 [2024-11-20 14:24:00.886814] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:22.012 [2024-11-20 14:24:00.886831] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:22.012 
[2024-11-20 14:24:00.887219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:22.012 [2024-11-20 14:24:00.887433] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:22.012 [2024-11-20 14:24:00.887450] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:22.012 [2024-11-20 14:24:00.887753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.012 NewBaseBdev 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.012 [ 00:13:22.012 { 00:13:22.012 "name": "NewBaseBdev", 00:13:22.012 "aliases": [ 00:13:22.012 "b6110a17-ac90-4cf9-8ca8-dd9c9ccdd21b" 00:13:22.012 ], 00:13:22.012 "product_name": "Malloc disk", 00:13:22.012 "block_size": 512, 00:13:22.012 "num_blocks": 65536, 00:13:22.012 "uuid": "b6110a17-ac90-4cf9-8ca8-dd9c9ccdd21b", 00:13:22.012 "assigned_rate_limits": { 00:13:22.012 "rw_ios_per_sec": 0, 00:13:22.012 "rw_mbytes_per_sec": 0, 00:13:22.012 "r_mbytes_per_sec": 0, 00:13:22.012 "w_mbytes_per_sec": 0 00:13:22.012 }, 00:13:22.012 "claimed": true, 00:13:22.012 "claim_type": "exclusive_write", 00:13:22.012 "zoned": false, 00:13:22.012 "supported_io_types": { 00:13:22.012 "read": true, 00:13:22.012 "write": true, 00:13:22.012 "unmap": true, 00:13:22.012 "flush": true, 00:13:22.012 "reset": true, 00:13:22.012 "nvme_admin": false, 00:13:22.012 "nvme_io": false, 00:13:22.012 "nvme_io_md": false, 00:13:22.012 "write_zeroes": true, 00:13:22.012 "zcopy": true, 00:13:22.012 "get_zone_info": false, 00:13:22.012 "zone_management": false, 00:13:22.012 "zone_append": false, 00:13:22.012 "compare": false, 00:13:22.012 "compare_and_write": false, 00:13:22.012 "abort": true, 00:13:22.012 "seek_hole": false, 00:13:22.012 "seek_data": false, 00:13:22.012 "copy": true, 00:13:22.012 "nvme_iov_md": false 00:13:22.012 }, 00:13:22.012 "memory_domains": [ 00:13:22.012 { 00:13:22.012 "dma_device_id": "system", 00:13:22.012 "dma_device_type": 1 00:13:22.012 }, 00:13:22.012 { 00:13:22.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.012 "dma_device_type": 2 00:13:22.012 } 00:13:22.012 ], 00:13:22.012 "driver_specific": {} 00:13:22.012 } 00:13:22.012 ] 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.012 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.012 "name": "Existed_Raid", 00:13:22.012 "uuid": "d193a35e-8021-4593-ab6d-7b0e5ae4d338", 00:13:22.012 "strip_size_kb": 0, 00:13:22.012 "state": "online", 00:13:22.012 
"raid_level": "raid1", 00:13:22.012 "superblock": false, 00:13:22.012 "num_base_bdevs": 4, 00:13:22.012 "num_base_bdevs_discovered": 4, 00:13:22.013 "num_base_bdevs_operational": 4, 00:13:22.013 "base_bdevs_list": [ 00:13:22.013 { 00:13:22.013 "name": "NewBaseBdev", 00:13:22.013 "uuid": "b6110a17-ac90-4cf9-8ca8-dd9c9ccdd21b", 00:13:22.013 "is_configured": true, 00:13:22.013 "data_offset": 0, 00:13:22.013 "data_size": 65536 00:13:22.013 }, 00:13:22.013 { 00:13:22.013 "name": "BaseBdev2", 00:13:22.013 "uuid": "047ddfcd-6397-4ced-9dc4-5ec453cc8079", 00:13:22.013 "is_configured": true, 00:13:22.013 "data_offset": 0, 00:13:22.013 "data_size": 65536 00:13:22.013 }, 00:13:22.013 { 00:13:22.013 "name": "BaseBdev3", 00:13:22.013 "uuid": "efd7231f-ffc8-408f-a6b2-360fab3f658a", 00:13:22.013 "is_configured": true, 00:13:22.013 "data_offset": 0, 00:13:22.013 "data_size": 65536 00:13:22.013 }, 00:13:22.013 { 00:13:22.013 "name": "BaseBdev4", 00:13:22.013 "uuid": "25bd3a6f-29c0-4350-9b6c-d1e2aed11388", 00:13:22.013 "is_configured": true, 00:13:22.013 "data_offset": 0, 00:13:22.013 "data_size": 65536 00:13:22.013 } 00:13:22.013 ] 00:13:22.013 }' 00:13:22.013 14:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.013 14:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.601 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:22.601 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:22.601 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:22.601 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:22.601 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:22.601 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:13:22.601 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:22.601 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.601 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.601 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:22.601 [2024-11-20 14:24:01.447419] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:22.601 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.601 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:22.602 "name": "Existed_Raid", 00:13:22.602 "aliases": [ 00:13:22.602 "d193a35e-8021-4593-ab6d-7b0e5ae4d338" 00:13:22.602 ], 00:13:22.602 "product_name": "Raid Volume", 00:13:22.602 "block_size": 512, 00:13:22.602 "num_blocks": 65536, 00:13:22.602 "uuid": "d193a35e-8021-4593-ab6d-7b0e5ae4d338", 00:13:22.602 "assigned_rate_limits": { 00:13:22.602 "rw_ios_per_sec": 0, 00:13:22.602 "rw_mbytes_per_sec": 0, 00:13:22.602 "r_mbytes_per_sec": 0, 00:13:22.602 "w_mbytes_per_sec": 0 00:13:22.602 }, 00:13:22.602 "claimed": false, 00:13:22.602 "zoned": false, 00:13:22.602 "supported_io_types": { 00:13:22.602 "read": true, 00:13:22.602 "write": true, 00:13:22.602 "unmap": false, 00:13:22.602 "flush": false, 00:13:22.602 "reset": true, 00:13:22.602 "nvme_admin": false, 00:13:22.602 "nvme_io": false, 00:13:22.602 "nvme_io_md": false, 00:13:22.602 "write_zeroes": true, 00:13:22.602 "zcopy": false, 00:13:22.602 "get_zone_info": false, 00:13:22.602 "zone_management": false, 00:13:22.602 "zone_append": false, 00:13:22.602 "compare": false, 00:13:22.602 "compare_and_write": false, 00:13:22.602 "abort": false, 00:13:22.602 "seek_hole": false, 00:13:22.602 "seek_data": false, 00:13:22.602 
"copy": false, 00:13:22.602 "nvme_iov_md": false 00:13:22.602 }, 00:13:22.602 "memory_domains": [ 00:13:22.602 { 00:13:22.602 "dma_device_id": "system", 00:13:22.602 "dma_device_type": 1 00:13:22.602 }, 00:13:22.602 { 00:13:22.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.602 "dma_device_type": 2 00:13:22.602 }, 00:13:22.602 { 00:13:22.602 "dma_device_id": "system", 00:13:22.602 "dma_device_type": 1 00:13:22.602 }, 00:13:22.602 { 00:13:22.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.602 "dma_device_type": 2 00:13:22.602 }, 00:13:22.602 { 00:13:22.602 "dma_device_id": "system", 00:13:22.602 "dma_device_type": 1 00:13:22.602 }, 00:13:22.602 { 00:13:22.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.602 "dma_device_type": 2 00:13:22.602 }, 00:13:22.602 { 00:13:22.602 "dma_device_id": "system", 00:13:22.602 "dma_device_type": 1 00:13:22.602 }, 00:13:22.602 { 00:13:22.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.602 "dma_device_type": 2 00:13:22.602 } 00:13:22.602 ], 00:13:22.602 "driver_specific": { 00:13:22.602 "raid": { 00:13:22.602 "uuid": "d193a35e-8021-4593-ab6d-7b0e5ae4d338", 00:13:22.602 "strip_size_kb": 0, 00:13:22.602 "state": "online", 00:13:22.602 "raid_level": "raid1", 00:13:22.602 "superblock": false, 00:13:22.602 "num_base_bdevs": 4, 00:13:22.602 "num_base_bdevs_discovered": 4, 00:13:22.602 "num_base_bdevs_operational": 4, 00:13:22.602 "base_bdevs_list": [ 00:13:22.602 { 00:13:22.602 "name": "NewBaseBdev", 00:13:22.602 "uuid": "b6110a17-ac90-4cf9-8ca8-dd9c9ccdd21b", 00:13:22.602 "is_configured": true, 00:13:22.602 "data_offset": 0, 00:13:22.602 "data_size": 65536 00:13:22.602 }, 00:13:22.602 { 00:13:22.602 "name": "BaseBdev2", 00:13:22.602 "uuid": "047ddfcd-6397-4ced-9dc4-5ec453cc8079", 00:13:22.602 "is_configured": true, 00:13:22.602 "data_offset": 0, 00:13:22.602 "data_size": 65536 00:13:22.602 }, 00:13:22.602 { 00:13:22.602 "name": "BaseBdev3", 00:13:22.602 "uuid": "efd7231f-ffc8-408f-a6b2-360fab3f658a", 00:13:22.602 
"is_configured": true, 00:13:22.602 "data_offset": 0, 00:13:22.602 "data_size": 65536 00:13:22.602 }, 00:13:22.602 { 00:13:22.602 "name": "BaseBdev4", 00:13:22.602 "uuid": "25bd3a6f-29c0-4350-9b6c-d1e2aed11388", 00:13:22.602 "is_configured": true, 00:13:22.602 "data_offset": 0, 00:13:22.602 "data_size": 65536 00:13:22.602 } 00:13:22.602 ] 00:13:22.602 } 00:13:22.602 } 00:13:22.602 }' 00:13:22.602 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:22.602 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:22.602 BaseBdev2 00:13:22.602 BaseBdev3 00:13:22.602 BaseBdev4' 00:13:22.602 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.858 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:22.858 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.858 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.858 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:22.858 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.858 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.858 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.858 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.858 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.858 14:24:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.858 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:22.858 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.858 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.859 14:24:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.859 [2024-11-20 14:24:01.811085] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:22.859 [2024-11-20 14:24:01.811123] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.859 [2024-11-20 14:24:01.811252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.859 [2024-11-20 14:24:01.811618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.859 [2024-11-20 14:24:01.811652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73319 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73319 ']' 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73319 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.859 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73319 00:13:23.116 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:23.116 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:23.116 killing process with pid 73319 00:13:23.116 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73319' 00:13:23.116 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73319 00:13:23.116 [2024-11-20 14:24:01.851155] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:23.116 14:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73319 00:13:23.374 [2024-11-20 14:24:02.225815] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:24.746 00:13:24.746 real 0m13.153s 00:13:24.746 user 0m21.761s 00:13:24.746 sys 0m1.880s 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.746 ************************************ 00:13:24.746 END TEST raid_state_function_test 00:13:24.746 ************************************ 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:24.746 14:24:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:13:24.746 14:24:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:24.746 14:24:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.746 14:24:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:24.746 ************************************ 00:13:24.746 START TEST raid_state_function_test_sb 00:13:24.746 ************************************ 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.746 
14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74009 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:24.746 Process raid pid: 74009 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74009' 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74009 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74009 ']' 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.746 14:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.746 [2024-11-20 14:24:03.516677] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:13:24.746 [2024-11-20 14:24:03.516859] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.746 [2024-11-20 14:24:03.712053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.048 [2024-11-20 14:24:03.875652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.306 [2024-11-20 14:24:04.153306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.306 [2024-11-20 14:24:04.153420] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.873 [2024-11-20 14:24:04.558343] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:25.873 [2024-11-20 14:24:04.558433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:25.873 [2024-11-20 14:24:04.558459] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:25.873 [2024-11-20 14:24:04.558488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:25.873 [2024-11-20 14:24:04.558505] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:25.873 [2024-11-20 14:24:04.558525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:25.873 [2024-11-20 14:24:04.558541] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:25.873 [2024-11-20 14:24:04.558557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.873 14:24:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.873 "name": "Existed_Raid", 00:13:25.873 "uuid": "aaf730ab-d948-41f3-b6d2-5ddd459c4d61", 00:13:25.873 "strip_size_kb": 0, 00:13:25.873 "state": "configuring", 00:13:25.873 "raid_level": "raid1", 00:13:25.873 "superblock": true, 00:13:25.873 "num_base_bdevs": 4, 00:13:25.873 "num_base_bdevs_discovered": 0, 00:13:25.873 "num_base_bdevs_operational": 4, 00:13:25.873 "base_bdevs_list": [ 00:13:25.873 { 00:13:25.873 "name": "BaseBdev1", 00:13:25.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.873 "is_configured": false, 00:13:25.873 "data_offset": 0, 00:13:25.873 "data_size": 0 00:13:25.873 }, 00:13:25.873 { 00:13:25.873 "name": "BaseBdev2", 00:13:25.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.873 "is_configured": false, 00:13:25.873 "data_offset": 0, 00:13:25.873 "data_size": 0 00:13:25.873 }, 00:13:25.873 { 00:13:25.873 "name": "BaseBdev3", 00:13:25.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.873 "is_configured": false, 00:13:25.873 "data_offset": 0, 00:13:25.873 "data_size": 0 00:13:25.873 }, 00:13:25.873 { 00:13:25.873 "name": "BaseBdev4", 00:13:25.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.873 "is_configured": false, 00:13:25.873 "data_offset": 0, 00:13:25.873 "data_size": 0 00:13:25.873 } 00:13:25.873 ] 00:13:25.873 }' 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.873 14:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.132 14:24:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:26.132 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.132 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.132 [2024-11-20 14:24:05.082835] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:26.132 [2024-11-20 14:24:05.082887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:26.132 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.132 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:26.132 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.132 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.132 [2024-11-20 14:24:05.090826] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.132 [2024-11-20 14:24:05.090923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.132 [2024-11-20 14:24:05.090937] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:26.132 [2024-11-20 14:24:05.090953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:26.132 [2024-11-20 14:24:05.090963] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:26.132 [2024-11-20 14:24:05.090978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:26.132 [2024-11-20 14:24:05.091002] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:13:26.132 [2024-11-20 14:24:05.091019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:26.132 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.132 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:26.132 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.132 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.391 [2024-11-20 14:24:05.137123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.391 BaseBdev1 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.391 [ 00:13:26.391 { 00:13:26.391 "name": "BaseBdev1", 00:13:26.391 "aliases": [ 00:13:26.391 "5c5738b8-ebc9-4fd7-8c8c-08a54adce19b" 00:13:26.391 ], 00:13:26.391 "product_name": "Malloc disk", 00:13:26.391 "block_size": 512, 00:13:26.391 "num_blocks": 65536, 00:13:26.391 "uuid": "5c5738b8-ebc9-4fd7-8c8c-08a54adce19b", 00:13:26.391 "assigned_rate_limits": { 00:13:26.391 "rw_ios_per_sec": 0, 00:13:26.391 "rw_mbytes_per_sec": 0, 00:13:26.391 "r_mbytes_per_sec": 0, 00:13:26.391 "w_mbytes_per_sec": 0 00:13:26.391 }, 00:13:26.391 "claimed": true, 00:13:26.391 "claim_type": "exclusive_write", 00:13:26.391 "zoned": false, 00:13:26.391 "supported_io_types": { 00:13:26.391 "read": true, 00:13:26.391 "write": true, 00:13:26.391 "unmap": true, 00:13:26.391 "flush": true, 00:13:26.391 "reset": true, 00:13:26.391 "nvme_admin": false, 00:13:26.391 "nvme_io": false, 00:13:26.391 "nvme_io_md": false, 00:13:26.391 "write_zeroes": true, 00:13:26.391 "zcopy": true, 00:13:26.391 "get_zone_info": false, 00:13:26.391 "zone_management": false, 00:13:26.391 "zone_append": false, 00:13:26.391 "compare": false, 00:13:26.391 "compare_and_write": false, 00:13:26.391 "abort": true, 00:13:26.391 "seek_hole": false, 00:13:26.391 "seek_data": false, 00:13:26.391 "copy": true, 00:13:26.391 "nvme_iov_md": false 00:13:26.391 }, 00:13:26.391 "memory_domains": [ 00:13:26.391 { 00:13:26.391 "dma_device_id": "system", 00:13:26.391 "dma_device_type": 1 00:13:26.391 }, 00:13:26.391 { 00:13:26.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.391 "dma_device_type": 2 00:13:26.391 } 00:13:26.391 ], 00:13:26.391 "driver_specific": {} 
00:13:26.391 } 00:13:26.391 ] 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.391 "name": "Existed_Raid", 00:13:26.391 "uuid": "a2d51736-b5f0-46ea-bc8b-5acbd58d3f5c", 00:13:26.391 "strip_size_kb": 0, 00:13:26.391 "state": "configuring", 00:13:26.391 "raid_level": "raid1", 00:13:26.391 "superblock": true, 00:13:26.391 "num_base_bdevs": 4, 00:13:26.391 "num_base_bdevs_discovered": 1, 00:13:26.391 "num_base_bdevs_operational": 4, 00:13:26.391 "base_bdevs_list": [ 00:13:26.391 { 00:13:26.391 "name": "BaseBdev1", 00:13:26.391 "uuid": "5c5738b8-ebc9-4fd7-8c8c-08a54adce19b", 00:13:26.391 "is_configured": true, 00:13:26.391 "data_offset": 2048, 00:13:26.391 "data_size": 63488 00:13:26.391 }, 00:13:26.391 { 00:13:26.391 "name": "BaseBdev2", 00:13:26.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.391 "is_configured": false, 00:13:26.391 "data_offset": 0, 00:13:26.391 "data_size": 0 00:13:26.391 }, 00:13:26.391 { 00:13:26.391 "name": "BaseBdev3", 00:13:26.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.391 "is_configured": false, 00:13:26.391 "data_offset": 0, 00:13:26.391 "data_size": 0 00:13:26.391 }, 00:13:26.391 { 00:13:26.391 "name": "BaseBdev4", 00:13:26.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.391 "is_configured": false, 00:13:26.391 "data_offset": 0, 00:13:26.391 "data_size": 0 00:13:26.391 } 00:13:26.391 ] 00:13:26.391 }' 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.391 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:26.957 [2024-11-20 14:24:05.697327] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:26.957 [2024-11-20 14:24:05.697393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.957 [2024-11-20 14:24:05.709402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.957 [2024-11-20 14:24:05.712059] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:26.957 [2024-11-20 14:24:05.712122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:26.957 [2024-11-20 14:24:05.712142] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:26.957 [2024-11-20 14:24:05.712165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:26.957 [2024-11-20 14:24:05.712179] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:26.957 [2024-11-20 14:24:05.712197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:26.957 14:24:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.957 "name": 
"Existed_Raid", 00:13:26.957 "uuid": "8dafbd71-2489-435c-b2f4-532dd817b3bc", 00:13:26.957 "strip_size_kb": 0, 00:13:26.957 "state": "configuring", 00:13:26.957 "raid_level": "raid1", 00:13:26.957 "superblock": true, 00:13:26.957 "num_base_bdevs": 4, 00:13:26.957 "num_base_bdevs_discovered": 1, 00:13:26.957 "num_base_bdevs_operational": 4, 00:13:26.957 "base_bdevs_list": [ 00:13:26.957 { 00:13:26.957 "name": "BaseBdev1", 00:13:26.957 "uuid": "5c5738b8-ebc9-4fd7-8c8c-08a54adce19b", 00:13:26.957 "is_configured": true, 00:13:26.957 "data_offset": 2048, 00:13:26.957 "data_size": 63488 00:13:26.957 }, 00:13:26.957 { 00:13:26.957 "name": "BaseBdev2", 00:13:26.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.957 "is_configured": false, 00:13:26.957 "data_offset": 0, 00:13:26.957 "data_size": 0 00:13:26.957 }, 00:13:26.957 { 00:13:26.957 "name": "BaseBdev3", 00:13:26.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.957 "is_configured": false, 00:13:26.957 "data_offset": 0, 00:13:26.957 "data_size": 0 00:13:26.957 }, 00:13:26.957 { 00:13:26.957 "name": "BaseBdev4", 00:13:26.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.957 "is_configured": false, 00:13:26.957 "data_offset": 0, 00:13:26.957 "data_size": 0 00:13:26.957 } 00:13:26.957 ] 00:13:26.957 }' 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.957 14:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.577 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:27.577 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.577 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.577 [2024-11-20 14:24:06.269683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.577 
BaseBdev2 00:13:27.577 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.577 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:27.577 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:27.577 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:27.577 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:27.577 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:27.577 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:27.577 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:27.577 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.577 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.577 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.577 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:27.577 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.577 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.577 [ 00:13:27.577 { 00:13:27.577 "name": "BaseBdev2", 00:13:27.577 "aliases": [ 00:13:27.577 "e35d8cd8-45e1-4dfe-ba71-84b44da56ad9" 00:13:27.577 ], 00:13:27.577 "product_name": "Malloc disk", 00:13:27.577 "block_size": 512, 00:13:27.577 "num_blocks": 65536, 00:13:27.577 "uuid": "e35d8cd8-45e1-4dfe-ba71-84b44da56ad9", 00:13:27.577 "assigned_rate_limits": { 
00:13:27.577 "rw_ios_per_sec": 0, 00:13:27.577 "rw_mbytes_per_sec": 0, 00:13:27.577 "r_mbytes_per_sec": 0, 00:13:27.577 "w_mbytes_per_sec": 0 00:13:27.577 }, 00:13:27.577 "claimed": true, 00:13:27.577 "claim_type": "exclusive_write", 00:13:27.577 "zoned": false, 00:13:27.577 "supported_io_types": { 00:13:27.577 "read": true, 00:13:27.577 "write": true, 00:13:27.577 "unmap": true, 00:13:27.577 "flush": true, 00:13:27.577 "reset": true, 00:13:27.577 "nvme_admin": false, 00:13:27.577 "nvme_io": false, 00:13:27.577 "nvme_io_md": false, 00:13:27.577 "write_zeroes": true, 00:13:27.577 "zcopy": true, 00:13:27.577 "get_zone_info": false, 00:13:27.577 "zone_management": false, 00:13:27.577 "zone_append": false, 00:13:27.577 "compare": false, 00:13:27.577 "compare_and_write": false, 00:13:27.577 "abort": true, 00:13:27.577 "seek_hole": false, 00:13:27.577 "seek_data": false, 00:13:27.577 "copy": true, 00:13:27.577 "nvme_iov_md": false 00:13:27.577 }, 00:13:27.577 "memory_domains": [ 00:13:27.577 { 00:13:27.577 "dma_device_id": "system", 00:13:27.578 "dma_device_type": 1 00:13:27.578 }, 00:13:27.578 { 00:13:27.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.578 "dma_device_type": 2 00:13:27.578 } 00:13:27.578 ], 00:13:27.578 "driver_specific": {} 00:13:27.578 } 00:13:27.578 ] 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.578 "name": "Existed_Raid", 00:13:27.578 "uuid": "8dafbd71-2489-435c-b2f4-532dd817b3bc", 00:13:27.578 "strip_size_kb": 0, 00:13:27.578 "state": "configuring", 00:13:27.578 "raid_level": "raid1", 00:13:27.578 "superblock": true, 00:13:27.578 "num_base_bdevs": 4, 00:13:27.578 "num_base_bdevs_discovered": 2, 00:13:27.578 "num_base_bdevs_operational": 4, 00:13:27.578 
"base_bdevs_list": [ 00:13:27.578 { 00:13:27.578 "name": "BaseBdev1", 00:13:27.578 "uuid": "5c5738b8-ebc9-4fd7-8c8c-08a54adce19b", 00:13:27.578 "is_configured": true, 00:13:27.578 "data_offset": 2048, 00:13:27.578 "data_size": 63488 00:13:27.578 }, 00:13:27.578 { 00:13:27.578 "name": "BaseBdev2", 00:13:27.578 "uuid": "e35d8cd8-45e1-4dfe-ba71-84b44da56ad9", 00:13:27.578 "is_configured": true, 00:13:27.578 "data_offset": 2048, 00:13:27.578 "data_size": 63488 00:13:27.578 }, 00:13:27.578 { 00:13:27.578 "name": "BaseBdev3", 00:13:27.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.578 "is_configured": false, 00:13:27.578 "data_offset": 0, 00:13:27.578 "data_size": 0 00:13:27.578 }, 00:13:27.578 { 00:13:27.578 "name": "BaseBdev4", 00:13:27.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.578 "is_configured": false, 00:13:27.578 "data_offset": 0, 00:13:27.578 "data_size": 0 00:13:27.578 } 00:13:27.578 ] 00:13:27.578 }' 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.578 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.145 [2024-11-20 14:24:06.878005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:28.145 BaseBdev3 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.145 [ 00:13:28.145 { 00:13:28.145 "name": "BaseBdev3", 00:13:28.145 "aliases": [ 00:13:28.145 "c26fcde7-4cfa-461d-ab25-d12d916368d8" 00:13:28.145 ], 00:13:28.145 "product_name": "Malloc disk", 00:13:28.145 "block_size": 512, 00:13:28.145 "num_blocks": 65536, 00:13:28.145 "uuid": "c26fcde7-4cfa-461d-ab25-d12d916368d8", 00:13:28.145 "assigned_rate_limits": { 00:13:28.145 "rw_ios_per_sec": 0, 00:13:28.145 "rw_mbytes_per_sec": 0, 00:13:28.145 "r_mbytes_per_sec": 0, 00:13:28.145 "w_mbytes_per_sec": 0 00:13:28.145 }, 00:13:28.145 "claimed": true, 00:13:28.145 "claim_type": "exclusive_write", 00:13:28.145 "zoned": false, 00:13:28.145 "supported_io_types": { 00:13:28.145 "read": true, 00:13:28.145 
"write": true, 00:13:28.145 "unmap": true, 00:13:28.145 "flush": true, 00:13:28.145 "reset": true, 00:13:28.145 "nvme_admin": false, 00:13:28.145 "nvme_io": false, 00:13:28.145 "nvme_io_md": false, 00:13:28.145 "write_zeroes": true, 00:13:28.145 "zcopy": true, 00:13:28.145 "get_zone_info": false, 00:13:28.145 "zone_management": false, 00:13:28.145 "zone_append": false, 00:13:28.145 "compare": false, 00:13:28.145 "compare_and_write": false, 00:13:28.145 "abort": true, 00:13:28.145 "seek_hole": false, 00:13:28.145 "seek_data": false, 00:13:28.145 "copy": true, 00:13:28.145 "nvme_iov_md": false 00:13:28.145 }, 00:13:28.145 "memory_domains": [ 00:13:28.145 { 00:13:28.145 "dma_device_id": "system", 00:13:28.145 "dma_device_type": 1 00:13:28.145 }, 00:13:28.145 { 00:13:28.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.145 "dma_device_type": 2 00:13:28.145 } 00:13:28.145 ], 00:13:28.145 "driver_specific": {} 00:13:28.145 } 00:13:28.145 ] 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.145 "name": "Existed_Raid", 00:13:28.145 "uuid": "8dafbd71-2489-435c-b2f4-532dd817b3bc", 00:13:28.145 "strip_size_kb": 0, 00:13:28.145 "state": "configuring", 00:13:28.145 "raid_level": "raid1", 00:13:28.145 "superblock": true, 00:13:28.145 "num_base_bdevs": 4, 00:13:28.145 "num_base_bdevs_discovered": 3, 00:13:28.145 "num_base_bdevs_operational": 4, 00:13:28.145 "base_bdevs_list": [ 00:13:28.145 { 00:13:28.145 "name": "BaseBdev1", 00:13:28.145 "uuid": "5c5738b8-ebc9-4fd7-8c8c-08a54adce19b", 00:13:28.145 "is_configured": true, 00:13:28.145 "data_offset": 2048, 00:13:28.145 "data_size": 63488 00:13:28.145 }, 00:13:28.145 { 00:13:28.145 "name": "BaseBdev2", 00:13:28.145 "uuid": 
"e35d8cd8-45e1-4dfe-ba71-84b44da56ad9", 00:13:28.145 "is_configured": true, 00:13:28.145 "data_offset": 2048, 00:13:28.145 "data_size": 63488 00:13:28.145 }, 00:13:28.145 { 00:13:28.145 "name": "BaseBdev3", 00:13:28.145 "uuid": "c26fcde7-4cfa-461d-ab25-d12d916368d8", 00:13:28.145 "is_configured": true, 00:13:28.145 "data_offset": 2048, 00:13:28.145 "data_size": 63488 00:13:28.145 }, 00:13:28.145 { 00:13:28.145 "name": "BaseBdev4", 00:13:28.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.145 "is_configured": false, 00:13:28.145 "data_offset": 0, 00:13:28.145 "data_size": 0 00:13:28.145 } 00:13:28.145 ] 00:13:28.145 }' 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.145 14:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.713 [2024-11-20 14:24:07.494490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:28.713 [2024-11-20 14:24:07.494820] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:28.713 [2024-11-20 14:24:07.494841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:28.713 [2024-11-20 14:24:07.495215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:28.713 BaseBdev4 00:13:28.713 [2024-11-20 14:24:07.495422] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:28.713 [2024-11-20 14:24:07.495444] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:28.713 [2024-11-20 14:24:07.495622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.713 [ 00:13:28.713 { 00:13:28.713 "name": "BaseBdev4", 00:13:28.713 "aliases": [ 00:13:28.713 "32ab4883-3a3a-4615-b423-da4c4e0e87a3" 00:13:28.713 ], 00:13:28.713 "product_name": "Malloc disk", 00:13:28.713 "block_size": 512, 00:13:28.713 
"num_blocks": 65536, 00:13:28.713 "uuid": "32ab4883-3a3a-4615-b423-da4c4e0e87a3", 00:13:28.713 "assigned_rate_limits": { 00:13:28.713 "rw_ios_per_sec": 0, 00:13:28.713 "rw_mbytes_per_sec": 0, 00:13:28.713 "r_mbytes_per_sec": 0, 00:13:28.713 "w_mbytes_per_sec": 0 00:13:28.713 }, 00:13:28.713 "claimed": true, 00:13:28.713 "claim_type": "exclusive_write", 00:13:28.713 "zoned": false, 00:13:28.713 "supported_io_types": { 00:13:28.713 "read": true, 00:13:28.713 "write": true, 00:13:28.713 "unmap": true, 00:13:28.713 "flush": true, 00:13:28.713 "reset": true, 00:13:28.713 "nvme_admin": false, 00:13:28.713 "nvme_io": false, 00:13:28.713 "nvme_io_md": false, 00:13:28.713 "write_zeroes": true, 00:13:28.713 "zcopy": true, 00:13:28.713 "get_zone_info": false, 00:13:28.713 "zone_management": false, 00:13:28.713 "zone_append": false, 00:13:28.713 "compare": false, 00:13:28.713 "compare_and_write": false, 00:13:28.713 "abort": true, 00:13:28.713 "seek_hole": false, 00:13:28.713 "seek_data": false, 00:13:28.713 "copy": true, 00:13:28.713 "nvme_iov_md": false 00:13:28.713 }, 00:13:28.713 "memory_domains": [ 00:13:28.713 { 00:13:28.713 "dma_device_id": "system", 00:13:28.713 "dma_device_type": 1 00:13:28.713 }, 00:13:28.713 { 00:13:28.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.713 "dma_device_type": 2 00:13:28.713 } 00:13:28.713 ], 00:13:28.713 "driver_specific": {} 00:13:28.713 } 00:13:28.713 ] 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.713 "name": "Existed_Raid", 00:13:28.713 "uuid": "8dafbd71-2489-435c-b2f4-532dd817b3bc", 00:13:28.713 "strip_size_kb": 0, 00:13:28.713 "state": "online", 00:13:28.713 "raid_level": "raid1", 00:13:28.713 "superblock": true, 00:13:28.713 "num_base_bdevs": 4, 
00:13:28.713 "num_base_bdevs_discovered": 4, 00:13:28.713 "num_base_bdevs_operational": 4, 00:13:28.713 "base_bdevs_list": [ 00:13:28.713 { 00:13:28.713 "name": "BaseBdev1", 00:13:28.713 "uuid": "5c5738b8-ebc9-4fd7-8c8c-08a54adce19b", 00:13:28.713 "is_configured": true, 00:13:28.713 "data_offset": 2048, 00:13:28.713 "data_size": 63488 00:13:28.713 }, 00:13:28.713 { 00:13:28.713 "name": "BaseBdev2", 00:13:28.713 "uuid": "e35d8cd8-45e1-4dfe-ba71-84b44da56ad9", 00:13:28.713 "is_configured": true, 00:13:28.713 "data_offset": 2048, 00:13:28.713 "data_size": 63488 00:13:28.713 }, 00:13:28.713 { 00:13:28.713 "name": "BaseBdev3", 00:13:28.713 "uuid": "c26fcde7-4cfa-461d-ab25-d12d916368d8", 00:13:28.713 "is_configured": true, 00:13:28.713 "data_offset": 2048, 00:13:28.713 "data_size": 63488 00:13:28.713 }, 00:13:28.713 { 00:13:28.713 "name": "BaseBdev4", 00:13:28.713 "uuid": "32ab4883-3a3a-4615-b423-da4c4e0e87a3", 00:13:28.713 "is_configured": true, 00:13:28.713 "data_offset": 2048, 00:13:28.713 "data_size": 63488 00:13:28.713 } 00:13:28.713 ] 00:13:28.713 }' 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.713 14:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.280 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:29.280 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:29.280 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:29.280 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:29.280 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:29.280 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:29.280 
14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:29.280 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:29.280 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.280 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.280 [2024-11-20 14:24:08.051196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.280 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.280 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:29.280 "name": "Existed_Raid", 00:13:29.280 "aliases": [ 00:13:29.280 "8dafbd71-2489-435c-b2f4-532dd817b3bc" 00:13:29.280 ], 00:13:29.280 "product_name": "Raid Volume", 00:13:29.280 "block_size": 512, 00:13:29.280 "num_blocks": 63488, 00:13:29.280 "uuid": "8dafbd71-2489-435c-b2f4-532dd817b3bc", 00:13:29.280 "assigned_rate_limits": { 00:13:29.280 "rw_ios_per_sec": 0, 00:13:29.280 "rw_mbytes_per_sec": 0, 00:13:29.280 "r_mbytes_per_sec": 0, 00:13:29.280 "w_mbytes_per_sec": 0 00:13:29.280 }, 00:13:29.280 "claimed": false, 00:13:29.280 "zoned": false, 00:13:29.280 "supported_io_types": { 00:13:29.280 "read": true, 00:13:29.280 "write": true, 00:13:29.280 "unmap": false, 00:13:29.280 "flush": false, 00:13:29.281 "reset": true, 00:13:29.281 "nvme_admin": false, 00:13:29.281 "nvme_io": false, 00:13:29.281 "nvme_io_md": false, 00:13:29.281 "write_zeroes": true, 00:13:29.281 "zcopy": false, 00:13:29.281 "get_zone_info": false, 00:13:29.281 "zone_management": false, 00:13:29.281 "zone_append": false, 00:13:29.281 "compare": false, 00:13:29.281 "compare_and_write": false, 00:13:29.281 "abort": false, 00:13:29.281 "seek_hole": false, 00:13:29.281 "seek_data": false, 00:13:29.281 "copy": false, 00:13:29.281 
"nvme_iov_md": false 00:13:29.281 }, 00:13:29.281 "memory_domains": [ 00:13:29.281 { 00:13:29.281 "dma_device_id": "system", 00:13:29.281 "dma_device_type": 1 00:13:29.281 }, 00:13:29.281 { 00:13:29.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.281 "dma_device_type": 2 00:13:29.281 }, 00:13:29.281 { 00:13:29.281 "dma_device_id": "system", 00:13:29.281 "dma_device_type": 1 00:13:29.281 }, 00:13:29.281 { 00:13:29.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.281 "dma_device_type": 2 00:13:29.281 }, 00:13:29.281 { 00:13:29.281 "dma_device_id": "system", 00:13:29.281 "dma_device_type": 1 00:13:29.281 }, 00:13:29.281 { 00:13:29.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.281 "dma_device_type": 2 00:13:29.281 }, 00:13:29.281 { 00:13:29.281 "dma_device_id": "system", 00:13:29.281 "dma_device_type": 1 00:13:29.281 }, 00:13:29.281 { 00:13:29.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.281 "dma_device_type": 2 00:13:29.281 } 00:13:29.281 ], 00:13:29.281 "driver_specific": { 00:13:29.281 "raid": { 00:13:29.281 "uuid": "8dafbd71-2489-435c-b2f4-532dd817b3bc", 00:13:29.281 "strip_size_kb": 0, 00:13:29.281 "state": "online", 00:13:29.281 "raid_level": "raid1", 00:13:29.281 "superblock": true, 00:13:29.281 "num_base_bdevs": 4, 00:13:29.281 "num_base_bdevs_discovered": 4, 00:13:29.281 "num_base_bdevs_operational": 4, 00:13:29.281 "base_bdevs_list": [ 00:13:29.281 { 00:13:29.281 "name": "BaseBdev1", 00:13:29.281 "uuid": "5c5738b8-ebc9-4fd7-8c8c-08a54adce19b", 00:13:29.281 "is_configured": true, 00:13:29.281 "data_offset": 2048, 00:13:29.281 "data_size": 63488 00:13:29.281 }, 00:13:29.281 { 00:13:29.281 "name": "BaseBdev2", 00:13:29.281 "uuid": "e35d8cd8-45e1-4dfe-ba71-84b44da56ad9", 00:13:29.281 "is_configured": true, 00:13:29.281 "data_offset": 2048, 00:13:29.281 "data_size": 63488 00:13:29.281 }, 00:13:29.281 { 00:13:29.281 "name": "BaseBdev3", 00:13:29.281 "uuid": "c26fcde7-4cfa-461d-ab25-d12d916368d8", 00:13:29.281 "is_configured": true, 
00:13:29.281 "data_offset": 2048, 00:13:29.281 "data_size": 63488 00:13:29.281 }, 00:13:29.281 { 00:13:29.281 "name": "BaseBdev4", 00:13:29.281 "uuid": "32ab4883-3a3a-4615-b423-da4c4e0e87a3", 00:13:29.281 "is_configured": true, 00:13:29.281 "data_offset": 2048, 00:13:29.281 "data_size": 63488 00:13:29.281 } 00:13:29.281 ] 00:13:29.281 } 00:13:29.281 } 00:13:29.281 }' 00:13:29.281 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:29.281 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:29.281 BaseBdev2 00:13:29.281 BaseBdev3 00:13:29.281 BaseBdev4' 00:13:29.281 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.281 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:29.281 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.281 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.281 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:29.281 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.281 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.281 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.540 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.540 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.540 14:24:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.540 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:29.540 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.540 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.540 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.540 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.540 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.540 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.540 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.540 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:29.540 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.541 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.541 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.541 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.541 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.541 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.541 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:29.541 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:29.541 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.541 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.541 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.541 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.541 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.541 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.541 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:29.541 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.541 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.541 [2024-11-20 14:24:08.430940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:29.799 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.799 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:29.799 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:29.799 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:29.799 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:29.799 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:29.799 14:24:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:29.799 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.799 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.799 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.799 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.799 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.799 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.799 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.799 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.799 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.800 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.800 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.800 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.800 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.800 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.800 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.800 "name": "Existed_Raid", 00:13:29.800 "uuid": "8dafbd71-2489-435c-b2f4-532dd817b3bc", 00:13:29.800 "strip_size_kb": 0, 00:13:29.800 
"state": "online", 00:13:29.800 "raid_level": "raid1", 00:13:29.800 "superblock": true, 00:13:29.800 "num_base_bdevs": 4, 00:13:29.800 "num_base_bdevs_discovered": 3, 00:13:29.800 "num_base_bdevs_operational": 3, 00:13:29.800 "base_bdevs_list": [ 00:13:29.800 { 00:13:29.800 "name": null, 00:13:29.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.800 "is_configured": false, 00:13:29.800 "data_offset": 0, 00:13:29.800 "data_size": 63488 00:13:29.800 }, 00:13:29.800 { 00:13:29.800 "name": "BaseBdev2", 00:13:29.800 "uuid": "e35d8cd8-45e1-4dfe-ba71-84b44da56ad9", 00:13:29.800 "is_configured": true, 00:13:29.800 "data_offset": 2048, 00:13:29.800 "data_size": 63488 00:13:29.800 }, 00:13:29.800 { 00:13:29.800 "name": "BaseBdev3", 00:13:29.800 "uuid": "c26fcde7-4cfa-461d-ab25-d12d916368d8", 00:13:29.800 "is_configured": true, 00:13:29.800 "data_offset": 2048, 00:13:29.800 "data_size": 63488 00:13:29.800 }, 00:13:29.800 { 00:13:29.800 "name": "BaseBdev4", 00:13:29.800 "uuid": "32ab4883-3a3a-4615-b423-da4c4e0e87a3", 00:13:29.800 "is_configured": true, 00:13:29.800 "data_offset": 2048, 00:13:29.800 "data_size": 63488 00:13:29.800 } 00:13:29.800 ] 00:13:29.800 }' 00:13:29.800 14:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.800 14:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.367 14:24:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.367 [2024-11-20 14:24:09.138798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.367 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.367 [2024-11-20 14:24:09.287382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.626 [2024-11-20 14:24:09.433649] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:30.626 [2024-11-20 14:24:09.433913] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.626 [2024-11-20 14:24:09.527417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.626 [2024-11-20 14:24:09.527682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.626 [2024-11-20 14:24:09.527717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.626 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.885 BaseBdev2 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:30.885 [ 00:13:30.885 { 00:13:30.885 "name": "BaseBdev2", 00:13:30.885 "aliases": [ 00:13:30.885 "dd4da7d4-1d25-43d7-a24d-dbe675bd8539" 00:13:30.885 ], 00:13:30.885 "product_name": "Malloc disk", 00:13:30.885 "block_size": 512, 00:13:30.885 "num_blocks": 65536, 00:13:30.885 "uuid": "dd4da7d4-1d25-43d7-a24d-dbe675bd8539", 00:13:30.885 "assigned_rate_limits": { 00:13:30.885 "rw_ios_per_sec": 0, 00:13:30.885 "rw_mbytes_per_sec": 0, 00:13:30.885 "r_mbytes_per_sec": 0, 00:13:30.885 "w_mbytes_per_sec": 0 00:13:30.885 }, 00:13:30.885 "claimed": false, 00:13:30.885 "zoned": false, 00:13:30.885 "supported_io_types": { 00:13:30.885 "read": true, 00:13:30.885 "write": true, 00:13:30.885 "unmap": true, 00:13:30.885 "flush": true, 00:13:30.885 "reset": true, 00:13:30.885 "nvme_admin": false, 00:13:30.885 "nvme_io": false, 00:13:30.885 "nvme_io_md": false, 00:13:30.885 "write_zeroes": true, 00:13:30.885 "zcopy": true, 00:13:30.885 "get_zone_info": false, 00:13:30.885 "zone_management": false, 00:13:30.885 "zone_append": false, 00:13:30.885 "compare": false, 00:13:30.885 "compare_and_write": false, 00:13:30.885 "abort": true, 00:13:30.885 "seek_hole": false, 00:13:30.885 "seek_data": false, 00:13:30.885 "copy": true, 00:13:30.885 "nvme_iov_md": false 00:13:30.885 }, 00:13:30.885 "memory_domains": [ 00:13:30.885 { 00:13:30.885 "dma_device_id": "system", 00:13:30.885 "dma_device_type": 1 00:13:30.885 }, 00:13:30.885 { 00:13:30.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.885 "dma_device_type": 2 00:13:30.885 } 00:13:30.885 ], 00:13:30.885 "driver_specific": {} 00:13:30.885 } 00:13:30.885 ] 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:30.885 14:24:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.885 BaseBdev3 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.885 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.886 14:24:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.886 [ 00:13:30.886 { 00:13:30.886 "name": "BaseBdev3", 00:13:30.886 "aliases": [ 00:13:30.886 "40be4775-9a79-49dd-bbca-eadfcc23b13d" 00:13:30.886 ], 00:13:30.886 "product_name": "Malloc disk", 00:13:30.886 "block_size": 512, 00:13:30.886 "num_blocks": 65536, 00:13:30.886 "uuid": "40be4775-9a79-49dd-bbca-eadfcc23b13d", 00:13:30.886 "assigned_rate_limits": { 00:13:30.886 "rw_ios_per_sec": 0, 00:13:30.886 "rw_mbytes_per_sec": 0, 00:13:30.886 "r_mbytes_per_sec": 0, 00:13:30.886 "w_mbytes_per_sec": 0 00:13:30.886 }, 00:13:30.886 "claimed": false, 00:13:30.886 "zoned": false, 00:13:30.886 "supported_io_types": { 00:13:30.886 "read": true, 00:13:30.886 "write": true, 00:13:30.886 "unmap": true, 00:13:30.886 "flush": true, 00:13:30.886 "reset": true, 00:13:30.886 "nvme_admin": false, 00:13:30.886 "nvme_io": false, 00:13:30.886 "nvme_io_md": false, 00:13:30.886 "write_zeroes": true, 00:13:30.886 "zcopy": true, 00:13:30.886 "get_zone_info": false, 00:13:30.886 "zone_management": false, 00:13:30.886 "zone_append": false, 00:13:30.886 "compare": false, 00:13:30.886 "compare_and_write": false, 00:13:30.886 "abort": true, 00:13:30.886 "seek_hole": false, 00:13:30.886 "seek_data": false, 00:13:30.886 "copy": true, 00:13:30.886 "nvme_iov_md": false 00:13:30.886 }, 00:13:30.886 "memory_domains": [ 00:13:30.886 { 00:13:30.886 "dma_device_id": "system", 00:13:30.886 "dma_device_type": 1 00:13:30.886 }, 00:13:30.886 { 00:13:30.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.886 "dma_device_type": 2 00:13:30.886 } 00:13:30.886 ], 00:13:30.886 "driver_specific": {} 00:13:30.886 } 00:13:30.886 ] 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.886 BaseBdev4 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.886 [ 00:13:30.886 { 00:13:30.886 "name": "BaseBdev4", 00:13:30.886 "aliases": [ 00:13:30.886 "e03af4f3-07ab-4034-9b39-ad71627eab91" 00:13:30.886 ], 00:13:30.886 "product_name": "Malloc disk", 00:13:30.886 "block_size": 512, 00:13:30.886 "num_blocks": 65536, 00:13:30.886 "uuid": "e03af4f3-07ab-4034-9b39-ad71627eab91", 00:13:30.886 "assigned_rate_limits": { 00:13:30.886 "rw_ios_per_sec": 0, 00:13:30.886 "rw_mbytes_per_sec": 0, 00:13:30.886 "r_mbytes_per_sec": 0, 00:13:30.886 "w_mbytes_per_sec": 0 00:13:30.886 }, 00:13:30.886 "claimed": false, 00:13:30.886 "zoned": false, 00:13:30.886 "supported_io_types": { 00:13:30.886 "read": true, 00:13:30.886 "write": true, 00:13:30.886 "unmap": true, 00:13:30.886 "flush": true, 00:13:30.886 "reset": true, 00:13:30.886 "nvme_admin": false, 00:13:30.886 "nvme_io": false, 00:13:30.886 "nvme_io_md": false, 00:13:30.886 "write_zeroes": true, 00:13:30.886 "zcopy": true, 00:13:30.886 "get_zone_info": false, 00:13:30.886 "zone_management": false, 00:13:30.886 "zone_append": false, 00:13:30.886 "compare": false, 00:13:30.886 "compare_and_write": false, 00:13:30.886 "abort": true, 00:13:30.886 "seek_hole": false, 00:13:30.886 "seek_data": false, 00:13:30.886 "copy": true, 00:13:30.886 "nvme_iov_md": false 00:13:30.886 }, 00:13:30.886 "memory_domains": [ 00:13:30.886 { 00:13:30.886 "dma_device_id": "system", 00:13:30.886 "dma_device_type": 1 00:13:30.886 }, 00:13:30.886 { 00:13:30.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.886 "dma_device_type": 2 00:13:30.886 } 00:13:30.886 ], 00:13:30.886 "driver_specific": {} 00:13:30.886 } 00:13:30.886 ] 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.886 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.886 [2024-11-20 14:24:09.830280] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:30.886 [2024-11-20 14:24:09.830340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:30.886 [2024-11-20 14:24:09.830369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:30.887 [2024-11-20 14:24:09.832762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:30.887 [2024-11-20 14:24:09.832831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:30.887 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.887 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:30.887 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.887 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.887 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.887 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:30.887 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.887 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.887 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.887 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.887 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.887 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.887 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.887 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.887 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.887 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.145 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.145 "name": "Existed_Raid", 00:13:31.145 "uuid": "de6e3928-99e8-41bf-b029-d4c2d0a7c270", 00:13:31.145 "strip_size_kb": 0, 00:13:31.145 "state": "configuring", 00:13:31.145 "raid_level": "raid1", 00:13:31.145 "superblock": true, 00:13:31.145 "num_base_bdevs": 4, 00:13:31.145 "num_base_bdevs_discovered": 3, 00:13:31.145 "num_base_bdevs_operational": 4, 00:13:31.145 "base_bdevs_list": [ 00:13:31.145 { 00:13:31.145 "name": "BaseBdev1", 00:13:31.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.145 "is_configured": false, 00:13:31.145 "data_offset": 0, 00:13:31.145 "data_size": 0 00:13:31.145 }, 00:13:31.145 { 00:13:31.145 "name": "BaseBdev2", 00:13:31.145 "uuid": "dd4da7d4-1d25-43d7-a24d-dbe675bd8539", 
00:13:31.145 "is_configured": true, 00:13:31.145 "data_offset": 2048, 00:13:31.145 "data_size": 63488 00:13:31.145 }, 00:13:31.145 { 00:13:31.145 "name": "BaseBdev3", 00:13:31.145 "uuid": "40be4775-9a79-49dd-bbca-eadfcc23b13d", 00:13:31.145 "is_configured": true, 00:13:31.145 "data_offset": 2048, 00:13:31.145 "data_size": 63488 00:13:31.145 }, 00:13:31.145 { 00:13:31.145 "name": "BaseBdev4", 00:13:31.145 "uuid": "e03af4f3-07ab-4034-9b39-ad71627eab91", 00:13:31.145 "is_configured": true, 00:13:31.145 "data_offset": 2048, 00:13:31.145 "data_size": 63488 00:13:31.145 } 00:13:31.145 ] 00:13:31.145 }' 00:13:31.145 14:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.145 14:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.404 [2024-11-20 14:24:10.362441] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.404 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.662 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.662 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.662 "name": "Existed_Raid", 00:13:31.662 "uuid": "de6e3928-99e8-41bf-b029-d4c2d0a7c270", 00:13:31.662 "strip_size_kb": 0, 00:13:31.662 "state": "configuring", 00:13:31.662 "raid_level": "raid1", 00:13:31.662 "superblock": true, 00:13:31.662 "num_base_bdevs": 4, 00:13:31.662 "num_base_bdevs_discovered": 2, 00:13:31.662 "num_base_bdevs_operational": 4, 00:13:31.662 "base_bdevs_list": [ 00:13:31.662 { 00:13:31.662 "name": "BaseBdev1", 00:13:31.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.662 "is_configured": false, 00:13:31.662 "data_offset": 0, 00:13:31.662 "data_size": 0 00:13:31.662 }, 00:13:31.662 { 00:13:31.662 "name": null, 00:13:31.662 "uuid": "dd4da7d4-1d25-43d7-a24d-dbe675bd8539", 00:13:31.662 
"is_configured": false, 00:13:31.662 "data_offset": 0, 00:13:31.662 "data_size": 63488 00:13:31.662 }, 00:13:31.662 { 00:13:31.662 "name": "BaseBdev3", 00:13:31.662 "uuid": "40be4775-9a79-49dd-bbca-eadfcc23b13d", 00:13:31.662 "is_configured": true, 00:13:31.662 "data_offset": 2048, 00:13:31.662 "data_size": 63488 00:13:31.662 }, 00:13:31.662 { 00:13:31.662 "name": "BaseBdev4", 00:13:31.662 "uuid": "e03af4f3-07ab-4034-9b39-ad71627eab91", 00:13:31.662 "is_configured": true, 00:13:31.662 "data_offset": 2048, 00:13:31.662 "data_size": 63488 00:13:31.662 } 00:13:31.662 ] 00:13:31.662 }' 00:13:31.662 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.662 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.921 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.921 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.921 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.921 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.179 [2024-11-20 14:24:10.976458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:32.179 BaseBdev1 
00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.179 14:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.179 [ 00:13:32.179 { 00:13:32.179 "name": "BaseBdev1", 00:13:32.179 "aliases": [ 00:13:32.179 "a3c89588-b39c-4076-9b8d-975c2a4f9842" 00:13:32.179 ], 00:13:32.179 "product_name": "Malloc disk", 00:13:32.179 "block_size": 512, 00:13:32.179 "num_blocks": 65536, 00:13:32.179 "uuid": "a3c89588-b39c-4076-9b8d-975c2a4f9842", 00:13:32.179 "assigned_rate_limits": { 00:13:32.179 
"rw_ios_per_sec": 0, 00:13:32.179 "rw_mbytes_per_sec": 0, 00:13:32.179 "r_mbytes_per_sec": 0, 00:13:32.179 "w_mbytes_per_sec": 0 00:13:32.179 }, 00:13:32.179 "claimed": true, 00:13:32.179 "claim_type": "exclusive_write", 00:13:32.179 "zoned": false, 00:13:32.179 "supported_io_types": { 00:13:32.179 "read": true, 00:13:32.179 "write": true, 00:13:32.179 "unmap": true, 00:13:32.179 "flush": true, 00:13:32.179 "reset": true, 00:13:32.179 "nvme_admin": false, 00:13:32.179 "nvme_io": false, 00:13:32.179 "nvme_io_md": false, 00:13:32.179 "write_zeroes": true, 00:13:32.179 "zcopy": true, 00:13:32.179 "get_zone_info": false, 00:13:32.179 "zone_management": false, 00:13:32.179 "zone_append": false, 00:13:32.179 "compare": false, 00:13:32.179 "compare_and_write": false, 00:13:32.179 "abort": true, 00:13:32.179 "seek_hole": false, 00:13:32.179 "seek_data": false, 00:13:32.179 "copy": true, 00:13:32.179 "nvme_iov_md": false 00:13:32.179 }, 00:13:32.179 "memory_domains": [ 00:13:32.179 { 00:13:32.179 "dma_device_id": "system", 00:13:32.179 "dma_device_type": 1 00:13:32.179 }, 00:13:32.179 { 00:13:32.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.179 "dma_device_type": 2 00:13:32.179 } 00:13:32.179 ], 00:13:32.180 "driver_specific": {} 00:13:32.180 } 00:13:32.180 ] 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.180 "name": "Existed_Raid", 00:13:32.180 "uuid": "de6e3928-99e8-41bf-b029-d4c2d0a7c270", 00:13:32.180 "strip_size_kb": 0, 00:13:32.180 "state": "configuring", 00:13:32.180 "raid_level": "raid1", 00:13:32.180 "superblock": true, 00:13:32.180 "num_base_bdevs": 4, 00:13:32.180 "num_base_bdevs_discovered": 3, 00:13:32.180 "num_base_bdevs_operational": 4, 00:13:32.180 "base_bdevs_list": [ 00:13:32.180 { 00:13:32.180 "name": "BaseBdev1", 00:13:32.180 "uuid": "a3c89588-b39c-4076-9b8d-975c2a4f9842", 00:13:32.180 "is_configured": true, 00:13:32.180 "data_offset": 2048, 00:13:32.180 "data_size": 63488 
00:13:32.180 }, 00:13:32.180 { 00:13:32.180 "name": null, 00:13:32.180 "uuid": "dd4da7d4-1d25-43d7-a24d-dbe675bd8539", 00:13:32.180 "is_configured": false, 00:13:32.180 "data_offset": 0, 00:13:32.180 "data_size": 63488 00:13:32.180 }, 00:13:32.180 { 00:13:32.180 "name": "BaseBdev3", 00:13:32.180 "uuid": "40be4775-9a79-49dd-bbca-eadfcc23b13d", 00:13:32.180 "is_configured": true, 00:13:32.180 "data_offset": 2048, 00:13:32.180 "data_size": 63488 00:13:32.180 }, 00:13:32.180 { 00:13:32.180 "name": "BaseBdev4", 00:13:32.180 "uuid": "e03af4f3-07ab-4034-9b39-ad71627eab91", 00:13:32.180 "is_configured": true, 00:13:32.180 "data_offset": 2048, 00:13:32.180 "data_size": 63488 00:13:32.180 } 00:13:32.180 ] 00:13:32.180 }' 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.180 14:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.789 
[2024-11-20 14:24:11.576721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.789 14:24:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.789 "name": "Existed_Raid", 00:13:32.789 "uuid": "de6e3928-99e8-41bf-b029-d4c2d0a7c270", 00:13:32.789 "strip_size_kb": 0, 00:13:32.789 "state": "configuring", 00:13:32.789 "raid_level": "raid1", 00:13:32.789 "superblock": true, 00:13:32.789 "num_base_bdevs": 4, 00:13:32.789 "num_base_bdevs_discovered": 2, 00:13:32.789 "num_base_bdevs_operational": 4, 00:13:32.789 "base_bdevs_list": [ 00:13:32.789 { 00:13:32.789 "name": "BaseBdev1", 00:13:32.789 "uuid": "a3c89588-b39c-4076-9b8d-975c2a4f9842", 00:13:32.789 "is_configured": true, 00:13:32.789 "data_offset": 2048, 00:13:32.789 "data_size": 63488 00:13:32.789 }, 00:13:32.789 { 00:13:32.789 "name": null, 00:13:32.789 "uuid": "dd4da7d4-1d25-43d7-a24d-dbe675bd8539", 00:13:32.789 "is_configured": false, 00:13:32.789 "data_offset": 0, 00:13:32.789 "data_size": 63488 00:13:32.789 }, 00:13:32.789 { 00:13:32.789 "name": null, 00:13:32.789 "uuid": "40be4775-9a79-49dd-bbca-eadfcc23b13d", 00:13:32.789 "is_configured": false, 00:13:32.789 "data_offset": 0, 00:13:32.789 "data_size": 63488 00:13:32.789 }, 00:13:32.789 { 00:13:32.789 "name": "BaseBdev4", 00:13:32.789 "uuid": "e03af4f3-07ab-4034-9b39-ad71627eab91", 00:13:32.789 "is_configured": true, 00:13:32.789 "data_offset": 2048, 00:13:32.789 "data_size": 63488 00:13:32.789 } 00:13:32.789 ] 00:13:32.789 }' 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.789 14:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.357 
14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.357 [2024-11-20 14:24:12.148865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.357 "name": "Existed_Raid", 00:13:33.357 "uuid": "de6e3928-99e8-41bf-b029-d4c2d0a7c270", 00:13:33.357 "strip_size_kb": 0, 00:13:33.357 "state": "configuring", 00:13:33.357 "raid_level": "raid1", 00:13:33.357 "superblock": true, 00:13:33.357 "num_base_bdevs": 4, 00:13:33.357 "num_base_bdevs_discovered": 3, 00:13:33.357 "num_base_bdevs_operational": 4, 00:13:33.357 "base_bdevs_list": [ 00:13:33.357 { 00:13:33.357 "name": "BaseBdev1", 00:13:33.357 "uuid": "a3c89588-b39c-4076-9b8d-975c2a4f9842", 00:13:33.357 "is_configured": true, 00:13:33.357 "data_offset": 2048, 00:13:33.357 "data_size": 63488 00:13:33.357 }, 00:13:33.357 { 00:13:33.357 "name": null, 00:13:33.357 "uuid": "dd4da7d4-1d25-43d7-a24d-dbe675bd8539", 00:13:33.357 "is_configured": false, 00:13:33.357 "data_offset": 0, 00:13:33.357 "data_size": 63488 00:13:33.357 }, 00:13:33.357 { 00:13:33.357 "name": "BaseBdev3", 00:13:33.357 "uuid": "40be4775-9a79-49dd-bbca-eadfcc23b13d", 00:13:33.357 "is_configured": true, 00:13:33.357 "data_offset": 2048, 00:13:33.357 "data_size": 63488 00:13:33.357 }, 00:13:33.357 { 00:13:33.357 "name": "BaseBdev4", 00:13:33.357 "uuid": 
"e03af4f3-07ab-4034-9b39-ad71627eab91", 00:13:33.357 "is_configured": true, 00:13:33.357 "data_offset": 2048, 00:13:33.357 "data_size": 63488 00:13:33.357 } 00:13:33.357 ] 00:13:33.357 }' 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.357 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.924 [2024-11-20 14:24:12.705076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.924 "name": "Existed_Raid", 00:13:33.924 "uuid": "de6e3928-99e8-41bf-b029-d4c2d0a7c270", 00:13:33.924 "strip_size_kb": 0, 00:13:33.924 "state": "configuring", 00:13:33.924 "raid_level": "raid1", 00:13:33.924 "superblock": true, 00:13:33.924 "num_base_bdevs": 4, 00:13:33.924 "num_base_bdevs_discovered": 2, 00:13:33.924 "num_base_bdevs_operational": 4, 00:13:33.924 "base_bdevs_list": [ 00:13:33.924 { 00:13:33.924 "name": null, 00:13:33.924 
"uuid": "a3c89588-b39c-4076-9b8d-975c2a4f9842", 00:13:33.924 "is_configured": false, 00:13:33.924 "data_offset": 0, 00:13:33.924 "data_size": 63488 00:13:33.924 }, 00:13:33.924 { 00:13:33.924 "name": null, 00:13:33.924 "uuid": "dd4da7d4-1d25-43d7-a24d-dbe675bd8539", 00:13:33.924 "is_configured": false, 00:13:33.924 "data_offset": 0, 00:13:33.924 "data_size": 63488 00:13:33.924 }, 00:13:33.924 { 00:13:33.924 "name": "BaseBdev3", 00:13:33.924 "uuid": "40be4775-9a79-49dd-bbca-eadfcc23b13d", 00:13:33.924 "is_configured": true, 00:13:33.924 "data_offset": 2048, 00:13:33.924 "data_size": 63488 00:13:33.924 }, 00:13:33.924 { 00:13:33.924 "name": "BaseBdev4", 00:13:33.924 "uuid": "e03af4f3-07ab-4034-9b39-ad71627eab91", 00:13:33.924 "is_configured": true, 00:13:33.924 "data_offset": 2048, 00:13:33.924 "data_size": 63488 00:13:33.924 } 00:13:33.924 ] 00:13:33.924 }' 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.924 14:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.489 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.489 14:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.489 14:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.489 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:34.489 14:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.489 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:34.489 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:34.489 14:24:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.489 14:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.489 [2024-11-20 14:24:13.353941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.489 14:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.490 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:34.490 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.490 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.490 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.490 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.490 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.490 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.490 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.490 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.490 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.490 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.490 14:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.490 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.490 14:24:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.490 14:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.490 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.490 "name": "Existed_Raid", 00:13:34.490 "uuid": "de6e3928-99e8-41bf-b029-d4c2d0a7c270", 00:13:34.490 "strip_size_kb": 0, 00:13:34.490 "state": "configuring", 00:13:34.490 "raid_level": "raid1", 00:13:34.490 "superblock": true, 00:13:34.490 "num_base_bdevs": 4, 00:13:34.490 "num_base_bdevs_discovered": 3, 00:13:34.490 "num_base_bdevs_operational": 4, 00:13:34.490 "base_bdevs_list": [ 00:13:34.490 { 00:13:34.490 "name": null, 00:13:34.490 "uuid": "a3c89588-b39c-4076-9b8d-975c2a4f9842", 00:13:34.490 "is_configured": false, 00:13:34.490 "data_offset": 0, 00:13:34.490 "data_size": 63488 00:13:34.490 }, 00:13:34.490 { 00:13:34.490 "name": "BaseBdev2", 00:13:34.490 "uuid": "dd4da7d4-1d25-43d7-a24d-dbe675bd8539", 00:13:34.490 "is_configured": true, 00:13:34.490 "data_offset": 2048, 00:13:34.490 "data_size": 63488 00:13:34.490 }, 00:13:34.490 { 00:13:34.490 "name": "BaseBdev3", 00:13:34.490 "uuid": "40be4775-9a79-49dd-bbca-eadfcc23b13d", 00:13:34.490 "is_configured": true, 00:13:34.490 "data_offset": 2048, 00:13:34.490 "data_size": 63488 00:13:34.490 }, 00:13:34.490 { 00:13:34.490 "name": "BaseBdev4", 00:13:34.490 "uuid": "e03af4f3-07ab-4034-9b39-ad71627eab91", 00:13:34.490 "is_configured": true, 00:13:34.490 "data_offset": 2048, 00:13:34.490 "data_size": 63488 00:13:34.490 } 00:13:34.490 ] 00:13:34.490 }' 00:13:34.490 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.490 14:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.059 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:35.059 14:24:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.059 14:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.059 14:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.059 14:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.059 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:35.059 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.059 14:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.059 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:35.059 14:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.059 14:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.059 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a3c89588-b39c-4076-9b8d-975c2a4f9842 00:13:35.059 14:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.059 14:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.059 [2024-11-20 14:24:13.999606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:35.059 [2024-11-20 14:24:14.000094] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:35.059 [2024-11-20 14:24:14.000127] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:35.059 NewBaseBdev 00:13:35.059 [2024-11-20 14:24:14.000452] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:35.059 [2024-11-20 14:24:14.000644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:35.059 [2024-11-20 14:24:14.000667] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:35.059 [2024-11-20 14:24:14.000830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.059 14:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.059 14:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.059 14:24:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.059 [ 00:13:35.059 { 00:13:35.059 "name": "NewBaseBdev", 00:13:35.059 "aliases": [ 00:13:35.059 "a3c89588-b39c-4076-9b8d-975c2a4f9842" 00:13:35.059 ], 00:13:35.059 "product_name": "Malloc disk", 00:13:35.059 "block_size": 512, 00:13:35.059 "num_blocks": 65536, 00:13:35.059 "uuid": "a3c89588-b39c-4076-9b8d-975c2a4f9842", 00:13:35.059 "assigned_rate_limits": { 00:13:35.059 "rw_ios_per_sec": 0, 00:13:35.059 "rw_mbytes_per_sec": 0, 00:13:35.059 "r_mbytes_per_sec": 0, 00:13:35.059 "w_mbytes_per_sec": 0 00:13:35.059 }, 00:13:35.059 "claimed": true, 00:13:35.059 "claim_type": "exclusive_write", 00:13:35.059 "zoned": false, 00:13:35.059 "supported_io_types": { 00:13:35.059 "read": true, 00:13:35.059 "write": true, 00:13:35.059 "unmap": true, 00:13:35.059 "flush": true, 00:13:35.059 "reset": true, 00:13:35.059 "nvme_admin": false, 00:13:35.059 "nvme_io": false, 00:13:35.059 "nvme_io_md": false, 00:13:35.059 "write_zeroes": true, 00:13:35.059 "zcopy": true, 00:13:35.059 "get_zone_info": false, 00:13:35.059 "zone_management": false, 00:13:35.059 "zone_append": false, 00:13:35.059 "compare": false, 00:13:35.059 "compare_and_write": false, 00:13:35.059 "abort": true, 00:13:35.059 "seek_hole": false, 00:13:35.059 "seek_data": false, 00:13:35.059 "copy": true, 00:13:35.059 "nvme_iov_md": false 00:13:35.059 }, 00:13:35.059 "memory_domains": [ 00:13:35.059 { 00:13:35.059 "dma_device_id": "system", 00:13:35.059 "dma_device_type": 1 00:13:35.059 }, 00:13:35.059 { 00:13:35.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.059 "dma_device_type": 2 00:13:35.059 } 00:13:35.059 ], 00:13:35.059 "driver_specific": {} 00:13:35.059 } 00:13:35.059 ] 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:35.059 14:24:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.059 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.318 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.318 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.318 "name": "Existed_Raid", 00:13:35.318 "uuid": "de6e3928-99e8-41bf-b029-d4c2d0a7c270", 00:13:35.318 "strip_size_kb": 0, 00:13:35.318 
"state": "online", 00:13:35.318 "raid_level": "raid1", 00:13:35.318 "superblock": true, 00:13:35.318 "num_base_bdevs": 4, 00:13:35.318 "num_base_bdevs_discovered": 4, 00:13:35.318 "num_base_bdevs_operational": 4, 00:13:35.318 "base_bdevs_list": [ 00:13:35.318 { 00:13:35.318 "name": "NewBaseBdev", 00:13:35.318 "uuid": "a3c89588-b39c-4076-9b8d-975c2a4f9842", 00:13:35.318 "is_configured": true, 00:13:35.318 "data_offset": 2048, 00:13:35.318 "data_size": 63488 00:13:35.318 }, 00:13:35.318 { 00:13:35.318 "name": "BaseBdev2", 00:13:35.318 "uuid": "dd4da7d4-1d25-43d7-a24d-dbe675bd8539", 00:13:35.318 "is_configured": true, 00:13:35.318 "data_offset": 2048, 00:13:35.318 "data_size": 63488 00:13:35.318 }, 00:13:35.318 { 00:13:35.318 "name": "BaseBdev3", 00:13:35.318 "uuid": "40be4775-9a79-49dd-bbca-eadfcc23b13d", 00:13:35.318 "is_configured": true, 00:13:35.318 "data_offset": 2048, 00:13:35.318 "data_size": 63488 00:13:35.318 }, 00:13:35.318 { 00:13:35.318 "name": "BaseBdev4", 00:13:35.318 "uuid": "e03af4f3-07ab-4034-9b39-ad71627eab91", 00:13:35.318 "is_configured": true, 00:13:35.318 "data_offset": 2048, 00:13:35.318 "data_size": 63488 00:13:35.318 } 00:13:35.318 ] 00:13:35.318 }' 00:13:35.318 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.319 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.886 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:35.886 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:35.886 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:35.886 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:35.886 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:35.886 
14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:35.886 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:35.886 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.886 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.886 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:35.886 [2024-11-20 14:24:14.572265] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.886 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.886 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:35.886 "name": "Existed_Raid", 00:13:35.886 "aliases": [ 00:13:35.886 "de6e3928-99e8-41bf-b029-d4c2d0a7c270" 00:13:35.886 ], 00:13:35.886 "product_name": "Raid Volume", 00:13:35.886 "block_size": 512, 00:13:35.886 "num_blocks": 63488, 00:13:35.886 "uuid": "de6e3928-99e8-41bf-b029-d4c2d0a7c270", 00:13:35.886 "assigned_rate_limits": { 00:13:35.886 "rw_ios_per_sec": 0, 00:13:35.886 "rw_mbytes_per_sec": 0, 00:13:35.886 "r_mbytes_per_sec": 0, 00:13:35.886 "w_mbytes_per_sec": 0 00:13:35.886 }, 00:13:35.886 "claimed": false, 00:13:35.886 "zoned": false, 00:13:35.886 "supported_io_types": { 00:13:35.886 "read": true, 00:13:35.886 "write": true, 00:13:35.886 "unmap": false, 00:13:35.886 "flush": false, 00:13:35.886 "reset": true, 00:13:35.886 "nvme_admin": false, 00:13:35.886 "nvme_io": false, 00:13:35.886 "nvme_io_md": false, 00:13:35.886 "write_zeroes": true, 00:13:35.886 "zcopy": false, 00:13:35.886 "get_zone_info": false, 00:13:35.886 "zone_management": false, 00:13:35.886 "zone_append": false, 00:13:35.886 "compare": false, 00:13:35.886 "compare_and_write": false, 00:13:35.886 
"abort": false, 00:13:35.886 "seek_hole": false, 00:13:35.886 "seek_data": false, 00:13:35.886 "copy": false, 00:13:35.886 "nvme_iov_md": false 00:13:35.886 }, 00:13:35.886 "memory_domains": [ 00:13:35.886 { 00:13:35.887 "dma_device_id": "system", 00:13:35.887 "dma_device_type": 1 00:13:35.887 }, 00:13:35.887 { 00:13:35.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.887 "dma_device_type": 2 00:13:35.887 }, 00:13:35.887 { 00:13:35.887 "dma_device_id": "system", 00:13:35.887 "dma_device_type": 1 00:13:35.887 }, 00:13:35.887 { 00:13:35.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.887 "dma_device_type": 2 00:13:35.887 }, 00:13:35.887 { 00:13:35.887 "dma_device_id": "system", 00:13:35.887 "dma_device_type": 1 00:13:35.887 }, 00:13:35.887 { 00:13:35.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.887 "dma_device_type": 2 00:13:35.887 }, 00:13:35.887 { 00:13:35.887 "dma_device_id": "system", 00:13:35.887 "dma_device_type": 1 00:13:35.887 }, 00:13:35.887 { 00:13:35.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.887 "dma_device_type": 2 00:13:35.887 } 00:13:35.887 ], 00:13:35.887 "driver_specific": { 00:13:35.887 "raid": { 00:13:35.887 "uuid": "de6e3928-99e8-41bf-b029-d4c2d0a7c270", 00:13:35.887 "strip_size_kb": 0, 00:13:35.887 "state": "online", 00:13:35.887 "raid_level": "raid1", 00:13:35.887 "superblock": true, 00:13:35.887 "num_base_bdevs": 4, 00:13:35.887 "num_base_bdevs_discovered": 4, 00:13:35.887 "num_base_bdevs_operational": 4, 00:13:35.887 "base_bdevs_list": [ 00:13:35.887 { 00:13:35.887 "name": "NewBaseBdev", 00:13:35.887 "uuid": "a3c89588-b39c-4076-9b8d-975c2a4f9842", 00:13:35.887 "is_configured": true, 00:13:35.887 "data_offset": 2048, 00:13:35.887 "data_size": 63488 00:13:35.887 }, 00:13:35.887 { 00:13:35.887 "name": "BaseBdev2", 00:13:35.887 "uuid": "dd4da7d4-1d25-43d7-a24d-dbe675bd8539", 00:13:35.887 "is_configured": true, 00:13:35.887 "data_offset": 2048, 00:13:35.887 "data_size": 63488 00:13:35.887 }, 00:13:35.887 { 
00:13:35.887 "name": "BaseBdev3", 00:13:35.887 "uuid": "40be4775-9a79-49dd-bbca-eadfcc23b13d", 00:13:35.887 "is_configured": true, 00:13:35.887 "data_offset": 2048, 00:13:35.887 "data_size": 63488 00:13:35.887 }, 00:13:35.887 { 00:13:35.887 "name": "BaseBdev4", 00:13:35.887 "uuid": "e03af4f3-07ab-4034-9b39-ad71627eab91", 00:13:35.887 "is_configured": true, 00:13:35.887 "data_offset": 2048, 00:13:35.887 "data_size": 63488 00:13:35.887 } 00:13:35.887 ] 00:13:35.887 } 00:13:35.887 } 00:13:35.887 }' 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:35.887 BaseBdev2 00:13:35.887 BaseBdev3 00:13:35.887 BaseBdev4' 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
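For context on the checks logged above: the script extracts the configured base bdev names with `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'`, then compares each bdev's `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` tuple (absent/null fields join as empty strings, which is why the comparison string is `'512   '`). A minimal Python sketch of the same two filters, run over abridged sample JSON taken from this log rather than live `rpc.py bdev_get_bdevs` output:

```python
import json

# Abridged raid bdev info, copied from the JSON dumped earlier in this log.
raid_bdev_info = json.loads("""
{
  "block_size": 512,
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "NewBaseBdev", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": true},
        {"name": "BaseBdev4", "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]

# Equivalent of:
#   jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# where keys absent from the JSON read as null and join as "".
fields = ("block_size", "md_size", "md_interleave", "dif_type")
cmp_raid_bdev = " ".join(
    str(raid_bdev_info[f]) if raid_bdev_info.get(f) is not None else ""
    for f in fields
)

print(base_bdev_names)
print(repr(cmp_raid_bdev))
```

The `[[ 512 == \5\1\2\ \ \  ]]` lines in the log are this same string being pattern-matched in bash, with each trailing space backslash-escaped.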
00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.887 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.146 [2024-11-20 14:24:14.967921] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:36.146 [2024-11-20 14:24:14.967955] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:36.146 [2024-11-20 14:24:14.968042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.146 [2024-11-20 14:24:14.968420] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.146 [2024-11-20 14:24:14.968445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74009 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74009 ']' 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74009 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:36.146 14:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74009 00:13:36.146 killing process with pid 74009 00:13:36.146 14:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:36.146 14:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:36.146 14:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74009' 00:13:36.146 14:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74009 00:13:36.146 [2024-11-20 14:24:15.003267] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:36.146 14:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74009 00:13:36.405 [2024-11-20 14:24:15.359244] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.782 14:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:37.782 00:13:37.782 real 0m13.003s 00:13:37.782 user 0m21.647s 00:13:37.782 sys 0m1.771s 00:13:37.782 ************************************ 00:13:37.782 END TEST raid_state_function_test_sb 
00:13:37.782 ************************************ 00:13:37.782 14:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.782 14:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.782 14:24:16 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:13:37.782 14:24:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:37.782 14:24:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.782 14:24:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.782 ************************************ 00:13:37.782 START TEST raid_superblock_test 00:13:37.782 ************************************ 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:37.782 14:24:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:37.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74693 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74693 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74693 ']' 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.782 14:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.782 [2024-11-20 14:24:16.553086] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:13:37.782 [2024-11-20 14:24:16.553244] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74693 ] 00:13:37.782 [2024-11-20 14:24:16.735965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.041 [2024-11-20 14:24:16.895456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.299 [2024-11-20 14:24:17.108676] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.299 [2024-11-20 14:24:17.108770] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.558 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.558 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:38.558 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:38.558 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:38.558 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:38.558 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:38.558 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:38.558 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:38.558 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:38.558 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:38.558 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:38.558 
14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.558 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.816 malloc1 00:13:38.816 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.816 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:38.816 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.816 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.816 [2024-11-20 14:24:17.584780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:38.816 [2024-11-20 14:24:17.584858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.816 [2024-11-20 14:24:17.584898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:38.816 [2024-11-20 14:24:17.584913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.816 [2024-11-20 14:24:17.587800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.816 [2024-11-20 14:24:17.587847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:38.816 pt1 00:13:38.816 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.816 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.817 malloc2 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.817 [2024-11-20 14:24:17.637229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:38.817 [2024-11-20 14:24:17.637300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.817 [2024-11-20 14:24:17.637339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:38.817 [2024-11-20 14:24:17.637354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.817 [2024-11-20 14:24:17.640114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.817 [2024-11-20 14:24:17.640158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:38.817 
pt2 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.817 malloc3 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.817 [2024-11-20 14:24:17.701681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:38.817 [2024-11-20 14:24:17.701751] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.817 [2024-11-20 14:24:17.701786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:38.817 [2024-11-20 14:24:17.701801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.817 [2024-11-20 14:24:17.704589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.817 [2024-11-20 14:24:17.704635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:38.817 pt3 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.817 malloc4 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.817 [2024-11-20 14:24:17.755416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:38.817 [2024-11-20 14:24:17.755493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.817 [2024-11-20 14:24:17.755525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:38.817 [2024-11-20 14:24:17.755541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.817 [2024-11-20 14:24:17.758389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.817 [2024-11-20 14:24:17.758572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:38.817 pt4 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.817 [2024-11-20 14:24:17.763470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:38.817 [2024-11-20 14:24:17.765904] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:38.817 [2024-11-20 14:24:17.766180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:38.817 [2024-11-20 14:24:17.766292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:38.817 [2024-11-20 14:24:17.766550] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:38.817 [2024-11-20 14:24:17.766574] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:38.817 [2024-11-20 14:24:17.766893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:38.817 [2024-11-20 14:24:17.767193] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:38.817 [2024-11-20 14:24:17.767235] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:38.817 [2024-11-20 14:24:17.767521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.817 
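The setup loop logged above (`bdev_raid.sh@416`-`@426`) runs once per base bdev: it creates `malloc<i>`, wraps it in a passthru bdev `pt<i>` with a fixed zero-padded UUID, and appends each name to three parallel shell arrays before `bdev_raid_create -r raid1` claims all four `pt` bdevs. A Python stand-in for that array bookkeeping (names and UUID pattern as they appear in this log; no RPCs are issued):

```python
# Mirrors the shell arrays base_bdevs_malloc / base_bdevs_pt /
# base_bdevs_pt_uuid built by the raid_superblock_test setup loop.
num_base_bdevs = 4

base_bdevs_malloc = []
base_bdevs_pt = []
base_bdevs_pt_uuid = []

for i in range(1, num_base_bdevs + 1):
    # rpc.py bdev_malloc_create 32 512 -b malloc<i>
    base_bdevs_malloc.append(f"malloc{i}")
    # rpc.py bdev_passthru_create -b malloc<i> -p pt<i> -u <uuid>
    base_bdevs_pt.append(f"pt{i}")
    # UUIDs follow the 00000000-0000-0000-0000-00000000000<i> pattern.
    base_bdevs_pt_uuid.append(f"00000000-0000-0000-0000-{i:012d}")

# The raid volume is then created over all passthru bdevs at once, roughly:
#   rpc.py bdev_raid_create -r raid1 -b "pt1 pt2 pt3 pt4" -n raid_bdev1 -s
print(" ".join(base_bdevs_pt))
```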
14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.817 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.076 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.076 "name": "raid_bdev1", 00:13:39.076 "uuid": "3ea1bb90-8440-4191-8097-99e6d965aa00", 00:13:39.076 "strip_size_kb": 0, 00:13:39.076 "state": "online", 00:13:39.076 "raid_level": "raid1", 00:13:39.076 "superblock": true, 00:13:39.076 "num_base_bdevs": 4, 00:13:39.076 "num_base_bdevs_discovered": 4, 00:13:39.076 "num_base_bdevs_operational": 4, 00:13:39.076 "base_bdevs_list": [ 00:13:39.076 { 00:13:39.076 "name": "pt1", 00:13:39.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:39.076 "is_configured": true, 00:13:39.076 "data_offset": 2048, 00:13:39.076 "data_size": 63488 00:13:39.076 }, 00:13:39.076 { 00:13:39.076 "name": "pt2", 00:13:39.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.076 "is_configured": true, 00:13:39.076 "data_offset": 2048, 00:13:39.076 "data_size": 63488 00:13:39.076 }, 00:13:39.076 { 00:13:39.076 "name": "pt3", 00:13:39.076 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.076 "is_configured": true, 00:13:39.076 "data_offset": 2048, 00:13:39.076 "data_size": 63488 
00:13:39.076 }, 00:13:39.076 { 00:13:39.076 "name": "pt4", 00:13:39.076 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:39.076 "is_configured": true, 00:13:39.076 "data_offset": 2048, 00:13:39.076 "data_size": 63488 00:13:39.076 } 00:13:39.076 ] 00:13:39.076 }' 00:13:39.076 14:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.076 14:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.335 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:39.335 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:39.335 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:39.335 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:39.335 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:39.335 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:39.335 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:39.335 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.335 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.335 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:39.335 [2024-11-20 14:24:18.288126] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.335 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.593 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:39.593 "name": "raid_bdev1", 00:13:39.593 "aliases": [ 00:13:39.593 "3ea1bb90-8440-4191-8097-99e6d965aa00" 00:13:39.593 ], 
00:13:39.593 "product_name": "Raid Volume", 00:13:39.593 "block_size": 512, 00:13:39.593 "num_blocks": 63488, 00:13:39.593 "uuid": "3ea1bb90-8440-4191-8097-99e6d965aa00", 00:13:39.593 "assigned_rate_limits": { 00:13:39.593 "rw_ios_per_sec": 0, 00:13:39.593 "rw_mbytes_per_sec": 0, 00:13:39.593 "r_mbytes_per_sec": 0, 00:13:39.593 "w_mbytes_per_sec": 0 00:13:39.593 }, 00:13:39.593 "claimed": false, 00:13:39.593 "zoned": false, 00:13:39.593 "supported_io_types": { 00:13:39.593 "read": true, 00:13:39.593 "write": true, 00:13:39.593 "unmap": false, 00:13:39.593 "flush": false, 00:13:39.593 "reset": true, 00:13:39.594 "nvme_admin": false, 00:13:39.594 "nvme_io": false, 00:13:39.594 "nvme_io_md": false, 00:13:39.594 "write_zeroes": true, 00:13:39.594 "zcopy": false, 00:13:39.594 "get_zone_info": false, 00:13:39.594 "zone_management": false, 00:13:39.594 "zone_append": false, 00:13:39.594 "compare": false, 00:13:39.594 "compare_and_write": false, 00:13:39.594 "abort": false, 00:13:39.594 "seek_hole": false, 00:13:39.594 "seek_data": false, 00:13:39.594 "copy": false, 00:13:39.594 "nvme_iov_md": false 00:13:39.594 }, 00:13:39.594 "memory_domains": [ 00:13:39.594 { 00:13:39.594 "dma_device_id": "system", 00:13:39.594 "dma_device_type": 1 00:13:39.594 }, 00:13:39.594 { 00:13:39.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.594 "dma_device_type": 2 00:13:39.594 }, 00:13:39.594 { 00:13:39.594 "dma_device_id": "system", 00:13:39.594 "dma_device_type": 1 00:13:39.594 }, 00:13:39.594 { 00:13:39.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.594 "dma_device_type": 2 00:13:39.594 }, 00:13:39.594 { 00:13:39.594 "dma_device_id": "system", 00:13:39.594 "dma_device_type": 1 00:13:39.594 }, 00:13:39.594 { 00:13:39.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.594 "dma_device_type": 2 00:13:39.594 }, 00:13:39.594 { 00:13:39.594 "dma_device_id": "system", 00:13:39.594 "dma_device_type": 1 00:13:39.594 }, 00:13:39.594 { 00:13:39.594 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:39.594 "dma_device_type": 2 00:13:39.594 } 00:13:39.594 ], 00:13:39.594 "driver_specific": { 00:13:39.594 "raid": { 00:13:39.594 "uuid": "3ea1bb90-8440-4191-8097-99e6d965aa00", 00:13:39.594 "strip_size_kb": 0, 00:13:39.594 "state": "online", 00:13:39.594 "raid_level": "raid1", 00:13:39.594 "superblock": true, 00:13:39.594 "num_base_bdevs": 4, 00:13:39.594 "num_base_bdevs_discovered": 4, 00:13:39.594 "num_base_bdevs_operational": 4, 00:13:39.594 "base_bdevs_list": [ 00:13:39.594 { 00:13:39.594 "name": "pt1", 00:13:39.594 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:39.594 "is_configured": true, 00:13:39.594 "data_offset": 2048, 00:13:39.594 "data_size": 63488 00:13:39.594 }, 00:13:39.594 { 00:13:39.594 "name": "pt2", 00:13:39.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.594 "is_configured": true, 00:13:39.594 "data_offset": 2048, 00:13:39.594 "data_size": 63488 00:13:39.594 }, 00:13:39.594 { 00:13:39.594 "name": "pt3", 00:13:39.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.594 "is_configured": true, 00:13:39.594 "data_offset": 2048, 00:13:39.594 "data_size": 63488 00:13:39.594 }, 00:13:39.594 { 00:13:39.594 "name": "pt4", 00:13:39.594 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:39.594 "is_configured": true, 00:13:39.594 "data_offset": 2048, 00:13:39.594 "data_size": 63488 00:13:39.594 } 00:13:39.594 ] 00:13:39.594 } 00:13:39.594 } 00:13:39.594 }' 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:39.594 pt2 00:13:39.594 pt3 00:13:39.594 pt4' 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.594 14:24:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.594 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:39.853 [2024-11-20 14:24:18.644127] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3ea1bb90-8440-4191-8097-99e6d965aa00 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3ea1bb90-8440-4191-8097-99e6d965aa00 ']' 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.853 [2024-11-20 14:24:18.699770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:39.853 [2024-11-20 14:24:18.699918] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:39.853 [2024-11-20 14:24:18.700139] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.853 [2024-11-20 14:24:18.700362] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:39.853 [2024-11-20 14:24:18.700517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.853 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:13:39.854 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:39.854 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:39.854 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.854 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.854 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.854 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:39.854 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:39.854 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.854 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.112 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.112 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:40.112 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:40.112 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:40.112 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:40.112 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:40.112 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.112 14:24:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:40.112 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.112 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:40.112 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.112 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.112 [2024-11-20 14:24:18.855821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:40.112 [2024-11-20 14:24:18.858296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:40.112 [2024-11-20 14:24:18.858365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:40.112 [2024-11-20 14:24:18.858425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:40.112 [2024-11-20 14:24:18.858501] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:40.112 [2024-11-20 14:24:18.858577] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:40.112 [2024-11-20 14:24:18.858611] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:40.112 [2024-11-20 14:24:18.858642] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:40.112 [2024-11-20 14:24:18.858663] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:40.112 [2024-11-20 14:24:18.858679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:13:40.112 request: 00:13:40.112 { 00:13:40.112 "name": "raid_bdev1", 00:13:40.112 "raid_level": "raid1", 00:13:40.112 "base_bdevs": [ 00:13:40.112 "malloc1", 00:13:40.113 "malloc2", 00:13:40.113 "malloc3", 00:13:40.113 "malloc4" 00:13:40.113 ], 00:13:40.113 "superblock": false, 00:13:40.113 "method": "bdev_raid_create", 00:13:40.113 "req_id": 1 00:13:40.113 } 00:13:40.113 Got JSON-RPC error response 00:13:40.113 response: 00:13:40.113 { 00:13:40.113 "code": -17, 00:13:40.113 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:40.113 } 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:40.113 14:24:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.113 [2024-11-20 14:24:18.915815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:40.113 [2024-11-20 14:24:18.916040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.113 [2024-11-20 14:24:18.916208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:40.113 [2024-11-20 14:24:18.916333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.113 [2024-11-20 14:24:18.919311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.113 [2024-11-20 14:24:18.919476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:40.113 [2024-11-20 14:24:18.919690] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:40.113 [2024-11-20 14:24:18.919886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:40.113 pt1 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.113 14:24:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.113 "name": "raid_bdev1", 00:13:40.113 "uuid": "3ea1bb90-8440-4191-8097-99e6d965aa00", 00:13:40.113 "strip_size_kb": 0, 00:13:40.113 "state": "configuring", 00:13:40.113 "raid_level": "raid1", 00:13:40.113 "superblock": true, 00:13:40.113 "num_base_bdevs": 4, 00:13:40.113 "num_base_bdevs_discovered": 1, 00:13:40.113 "num_base_bdevs_operational": 4, 00:13:40.113 "base_bdevs_list": [ 00:13:40.113 { 00:13:40.113 "name": "pt1", 00:13:40.113 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:40.113 "is_configured": true, 00:13:40.113 "data_offset": 2048, 00:13:40.113 "data_size": 63488 00:13:40.113 }, 00:13:40.113 { 00:13:40.113 "name": null, 00:13:40.113 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.113 "is_configured": false, 00:13:40.113 "data_offset": 2048, 00:13:40.113 "data_size": 63488 00:13:40.113 }, 00:13:40.113 { 00:13:40.113 "name": null, 00:13:40.113 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.113 
"is_configured": false, 00:13:40.113 "data_offset": 2048, 00:13:40.113 "data_size": 63488 00:13:40.113 }, 00:13:40.113 { 00:13:40.113 "name": null, 00:13:40.113 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:40.113 "is_configured": false, 00:13:40.113 "data_offset": 2048, 00:13:40.113 "data_size": 63488 00:13:40.113 } 00:13:40.113 ] 00:13:40.113 }' 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.113 14:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.680 [2024-11-20 14:24:19.428408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:40.680 [2024-11-20 14:24:19.428502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.680 [2024-11-20 14:24:19.428534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:40.680 [2024-11-20 14:24:19.428552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.680 [2024-11-20 14:24:19.429108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.680 [2024-11-20 14:24:19.429137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:40.680 [2024-11-20 14:24:19.429234] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:40.680 [2024-11-20 14:24:19.429271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:13:40.680 pt2 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.680 [2024-11-20 14:24:19.436398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.680 "name": "raid_bdev1", 00:13:40.680 "uuid": "3ea1bb90-8440-4191-8097-99e6d965aa00", 00:13:40.680 "strip_size_kb": 0, 00:13:40.680 "state": "configuring", 00:13:40.680 "raid_level": "raid1", 00:13:40.680 "superblock": true, 00:13:40.680 "num_base_bdevs": 4, 00:13:40.680 "num_base_bdevs_discovered": 1, 00:13:40.680 "num_base_bdevs_operational": 4, 00:13:40.680 "base_bdevs_list": [ 00:13:40.680 { 00:13:40.680 "name": "pt1", 00:13:40.680 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:40.680 "is_configured": true, 00:13:40.680 "data_offset": 2048, 00:13:40.680 "data_size": 63488 00:13:40.680 }, 00:13:40.680 { 00:13:40.680 "name": null, 00:13:40.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.680 "is_configured": false, 00:13:40.680 "data_offset": 0, 00:13:40.680 "data_size": 63488 00:13:40.680 }, 00:13:40.680 { 00:13:40.680 "name": null, 00:13:40.680 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.680 "is_configured": false, 00:13:40.680 "data_offset": 2048, 00:13:40.680 "data_size": 63488 00:13:40.680 }, 00:13:40.680 { 00:13:40.680 "name": null, 00:13:40.680 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:40.680 "is_configured": false, 00:13:40.680 "data_offset": 2048, 00:13:40.680 "data_size": 63488 00:13:40.680 } 00:13:40.680 ] 00:13:40.680 }' 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.680 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.247 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:13:41.247 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:41.247 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:41.247 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.247 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.247 [2024-11-20 14:24:19.968524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:41.247 [2024-11-20 14:24:19.968605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.247 [2024-11-20 14:24:19.968636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:41.247 [2024-11-20 14:24:19.968651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.247 [2024-11-20 14:24:19.969237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.247 [2024-11-20 14:24:19.969264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:41.247 [2024-11-20 14:24:19.969373] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:41.247 [2024-11-20 14:24:19.969404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:41.247 pt2 00:13:41.247 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.247 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:41.247 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:41.247 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:41.247 14:24:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.247 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.247 [2024-11-20 14:24:19.976492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:41.247 [2024-11-20 14:24:19.976548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.247 [2024-11-20 14:24:19.976576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:41.247 [2024-11-20 14:24:19.976590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.247 [2024-11-20 14:24:19.977038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.247 [2024-11-20 14:24:19.977068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:41.247 [2024-11-20 14:24:19.977157] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:41.247 [2024-11-20 14:24:19.977191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:41.247 pt3 00:13:41.247 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.247 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:41.247 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:41.247 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:41.247 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.247 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.247 [2024-11-20 14:24:19.984469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:41.247 [2024-11-20 
14:24:19.984522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.247 [2024-11-20 14:24:19.984548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:41.248 [2024-11-20 14:24:19.984561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.248 [2024-11-20 14:24:19.985017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.248 [2024-11-20 14:24:19.985047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:41.248 [2024-11-20 14:24:19.985141] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:41.248 [2024-11-20 14:24:19.985176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:41.248 [2024-11-20 14:24:19.985351] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:41.248 [2024-11-20 14:24:19.985374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:41.248 [2024-11-20 14:24:19.985690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:41.248 [2024-11-20 14:24:19.985889] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:41.248 [2024-11-20 14:24:19.985910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:41.248 [2024-11-20 14:24:19.986102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.248 pt4 00:13:41.248 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.248 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:41.248 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:41.248 14:24:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:41.248 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.248 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.248 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.248 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.248 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.248 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.248 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.248 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.248 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.248 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.248 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.248 14:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.248 14:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.248 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.248 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.248 "name": "raid_bdev1", 00:13:41.248 "uuid": "3ea1bb90-8440-4191-8097-99e6d965aa00", 00:13:41.248 "strip_size_kb": 0, 00:13:41.248 "state": "online", 00:13:41.248 "raid_level": "raid1", 00:13:41.248 "superblock": true, 00:13:41.248 "num_base_bdevs": 4, 00:13:41.248 
"num_base_bdevs_discovered": 4, 00:13:41.248 "num_base_bdevs_operational": 4, 00:13:41.248 "base_bdevs_list": [ 00:13:41.248 { 00:13:41.248 "name": "pt1", 00:13:41.248 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:41.248 "is_configured": true, 00:13:41.248 "data_offset": 2048, 00:13:41.248 "data_size": 63488 00:13:41.248 }, 00:13:41.248 { 00:13:41.248 "name": "pt2", 00:13:41.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.248 "is_configured": true, 00:13:41.248 "data_offset": 2048, 00:13:41.248 "data_size": 63488 00:13:41.248 }, 00:13:41.248 { 00:13:41.248 "name": "pt3", 00:13:41.248 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.248 "is_configured": true, 00:13:41.248 "data_offset": 2048, 00:13:41.248 "data_size": 63488 00:13:41.248 }, 00:13:41.248 { 00:13:41.248 "name": "pt4", 00:13:41.248 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:41.248 "is_configured": true, 00:13:41.248 "data_offset": 2048, 00:13:41.248 "data_size": 63488 00:13:41.248 } 00:13:41.248 ] 00:13:41.248 }' 00:13:41.248 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.248 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.856 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:41.856 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:41.856 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:41.856 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:41.856 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.857 [2024-11-20 14:24:20.533100] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:41.857 "name": "raid_bdev1", 00:13:41.857 "aliases": [ 00:13:41.857 "3ea1bb90-8440-4191-8097-99e6d965aa00" 00:13:41.857 ], 00:13:41.857 "product_name": "Raid Volume", 00:13:41.857 "block_size": 512, 00:13:41.857 "num_blocks": 63488, 00:13:41.857 "uuid": "3ea1bb90-8440-4191-8097-99e6d965aa00", 00:13:41.857 "assigned_rate_limits": { 00:13:41.857 "rw_ios_per_sec": 0, 00:13:41.857 "rw_mbytes_per_sec": 0, 00:13:41.857 "r_mbytes_per_sec": 0, 00:13:41.857 "w_mbytes_per_sec": 0 00:13:41.857 }, 00:13:41.857 "claimed": false, 00:13:41.857 "zoned": false, 00:13:41.857 "supported_io_types": { 00:13:41.857 "read": true, 00:13:41.857 "write": true, 00:13:41.857 "unmap": false, 00:13:41.857 "flush": false, 00:13:41.857 "reset": true, 00:13:41.857 "nvme_admin": false, 00:13:41.857 "nvme_io": false, 00:13:41.857 "nvme_io_md": false, 00:13:41.857 "write_zeroes": true, 00:13:41.857 "zcopy": false, 00:13:41.857 "get_zone_info": false, 00:13:41.857 "zone_management": false, 00:13:41.857 "zone_append": false, 00:13:41.857 "compare": false, 00:13:41.857 "compare_and_write": false, 00:13:41.857 "abort": false, 00:13:41.857 "seek_hole": false, 00:13:41.857 "seek_data": false, 00:13:41.857 "copy": false, 00:13:41.857 "nvme_iov_md": false 00:13:41.857 }, 00:13:41.857 "memory_domains": [ 00:13:41.857 { 00:13:41.857 "dma_device_id": "system", 00:13:41.857 
"dma_device_type": 1 00:13:41.857 }, 00:13:41.857 { 00:13:41.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.857 "dma_device_type": 2 00:13:41.857 }, 00:13:41.857 { 00:13:41.857 "dma_device_id": "system", 00:13:41.857 "dma_device_type": 1 00:13:41.857 }, 00:13:41.857 { 00:13:41.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.857 "dma_device_type": 2 00:13:41.857 }, 00:13:41.857 { 00:13:41.857 "dma_device_id": "system", 00:13:41.857 "dma_device_type": 1 00:13:41.857 }, 00:13:41.857 { 00:13:41.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.857 "dma_device_type": 2 00:13:41.857 }, 00:13:41.857 { 00:13:41.857 "dma_device_id": "system", 00:13:41.857 "dma_device_type": 1 00:13:41.857 }, 00:13:41.857 { 00:13:41.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.857 "dma_device_type": 2 00:13:41.857 } 00:13:41.857 ], 00:13:41.857 "driver_specific": { 00:13:41.857 "raid": { 00:13:41.857 "uuid": "3ea1bb90-8440-4191-8097-99e6d965aa00", 00:13:41.857 "strip_size_kb": 0, 00:13:41.857 "state": "online", 00:13:41.857 "raid_level": "raid1", 00:13:41.857 "superblock": true, 00:13:41.857 "num_base_bdevs": 4, 00:13:41.857 "num_base_bdevs_discovered": 4, 00:13:41.857 "num_base_bdevs_operational": 4, 00:13:41.857 "base_bdevs_list": [ 00:13:41.857 { 00:13:41.857 "name": "pt1", 00:13:41.857 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:41.857 "is_configured": true, 00:13:41.857 "data_offset": 2048, 00:13:41.857 "data_size": 63488 00:13:41.857 }, 00:13:41.857 { 00:13:41.857 "name": "pt2", 00:13:41.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.857 "is_configured": true, 00:13:41.857 "data_offset": 2048, 00:13:41.857 "data_size": 63488 00:13:41.857 }, 00:13:41.857 { 00:13:41.857 "name": "pt3", 00:13:41.857 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.857 "is_configured": true, 00:13:41.857 "data_offset": 2048, 00:13:41.857 "data_size": 63488 00:13:41.857 }, 00:13:41.857 { 00:13:41.857 "name": "pt4", 00:13:41.857 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:13:41.857 "is_configured": true, 00:13:41.857 "data_offset": 2048, 00:13:41.857 "data_size": 63488 00:13:41.857 } 00:13:41.857 ] 00:13:41.857 } 00:13:41.857 } 00:13:41.857 }' 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:41.857 pt2 00:13:41.857 pt3 00:13:41.857 pt4' 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.857 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.116 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.116 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.116 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.116 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.116 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:42.116 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.116 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:13:42.116 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.116 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.116 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.116 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.117 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:42.117 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:42.117 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.117 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.117 [2024-11-20 14:24:20.937182] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.117 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.117 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3ea1bb90-8440-4191-8097-99e6d965aa00 '!=' 3ea1bb90-8440-4191-8097-99e6d965aa00 ']' 00:13:42.117 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:42.117 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:42.117 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:42.117 14:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:42.117 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.117 14:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.117 [2024-11-20 14:24:20.996878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:42.117 14:24:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.117 "name": "raid_bdev1", 00:13:42.117 "uuid": "3ea1bb90-8440-4191-8097-99e6d965aa00", 00:13:42.117 "strip_size_kb": 0, 00:13:42.117 "state": "online", 
00:13:42.117 "raid_level": "raid1", 00:13:42.117 "superblock": true, 00:13:42.117 "num_base_bdevs": 4, 00:13:42.117 "num_base_bdevs_discovered": 3, 00:13:42.117 "num_base_bdevs_operational": 3, 00:13:42.117 "base_bdevs_list": [ 00:13:42.117 { 00:13:42.117 "name": null, 00:13:42.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.117 "is_configured": false, 00:13:42.117 "data_offset": 0, 00:13:42.117 "data_size": 63488 00:13:42.117 }, 00:13:42.117 { 00:13:42.117 "name": "pt2", 00:13:42.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.117 "is_configured": true, 00:13:42.117 "data_offset": 2048, 00:13:42.117 "data_size": 63488 00:13:42.117 }, 00:13:42.117 { 00:13:42.117 "name": "pt3", 00:13:42.117 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.117 "is_configured": true, 00:13:42.117 "data_offset": 2048, 00:13:42.117 "data_size": 63488 00:13:42.117 }, 00:13:42.117 { 00:13:42.117 "name": "pt4", 00:13:42.117 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:42.117 "is_configured": true, 00:13:42.117 "data_offset": 2048, 00:13:42.117 "data_size": 63488 00:13:42.117 } 00:13:42.117 ] 00:13:42.117 }' 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.117 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.685 [2024-11-20 14:24:21.532920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:42.685 [2024-11-20 14:24:21.532959] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.685 [2024-11-20 14:24:21.533080] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:13:42.685 [2024-11-20 14:24:21.533195] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.685 [2024-11-20 14:24:21.533211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:42.685 
14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.685 [2024-11-20 14:24:21.616930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:42.685 [2024-11-20 14:24:21.617013] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.685 [2024-11-20 14:24:21.617045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:42.685 [2024-11-20 14:24:21.617060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.685 [2024-11-20 14:24:21.619906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.685 [2024-11-20 14:24:21.619952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:42.685 [2024-11-20 14:24:21.620070] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:42.685 [2024-11-20 14:24:21.620132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:42.685 pt2 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.685 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.945 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.945 "name": "raid_bdev1", 00:13:42.945 "uuid": "3ea1bb90-8440-4191-8097-99e6d965aa00", 00:13:42.945 "strip_size_kb": 0, 00:13:42.945 "state": "configuring", 00:13:42.945 "raid_level": "raid1", 00:13:42.945 "superblock": true, 00:13:42.945 "num_base_bdevs": 4, 00:13:42.945 "num_base_bdevs_discovered": 1, 00:13:42.945 "num_base_bdevs_operational": 3, 00:13:42.945 "base_bdevs_list": [ 00:13:42.945 { 00:13:42.945 "name": null, 00:13:42.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.945 "is_configured": false, 00:13:42.945 "data_offset": 2048, 00:13:42.945 "data_size": 63488 00:13:42.945 }, 00:13:42.945 { 00:13:42.945 "name": "pt2", 00:13:42.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.945 "is_configured": true, 00:13:42.945 "data_offset": 2048, 00:13:42.945 "data_size": 63488 00:13:42.945 }, 00:13:42.945 { 00:13:42.945 "name": null, 00:13:42.945 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.945 "is_configured": false, 00:13:42.945 "data_offset": 2048, 00:13:42.945 "data_size": 63488 00:13:42.945 }, 00:13:42.945 { 00:13:42.945 "name": null, 00:13:42.945 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:42.945 "is_configured": false, 00:13:42.945 "data_offset": 2048, 00:13:42.945 "data_size": 63488 00:13:42.945 } 00:13:42.945 ] 00:13:42.945 }' 
00:13:42.945 14:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.945 14:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.204 [2024-11-20 14:24:22.165127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:43.204 [2024-11-20 14:24:22.165348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.204 [2024-11-20 14:24:22.165522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:43.204 [2024-11-20 14:24:22.165649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.204 [2024-11-20 14:24:22.166287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.204 [2024-11-20 14:24:22.166440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:43.204 [2024-11-20 14:24:22.166571] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:43.204 [2024-11-20 14:24:22.166605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:43.204 pt3 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.204 14:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.462 14:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.462 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.462 "name": "raid_bdev1", 00:13:43.462 "uuid": "3ea1bb90-8440-4191-8097-99e6d965aa00", 00:13:43.462 "strip_size_kb": 0, 00:13:43.462 "state": "configuring", 00:13:43.462 "raid_level": "raid1", 00:13:43.462 "superblock": true, 00:13:43.462 "num_base_bdevs": 4, 00:13:43.462 "num_base_bdevs_discovered": 2, 00:13:43.462 "num_base_bdevs_operational": 3, 00:13:43.462 
"base_bdevs_list": [ 00:13:43.462 { 00:13:43.462 "name": null, 00:13:43.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.462 "is_configured": false, 00:13:43.462 "data_offset": 2048, 00:13:43.462 "data_size": 63488 00:13:43.462 }, 00:13:43.462 { 00:13:43.462 "name": "pt2", 00:13:43.462 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.462 "is_configured": true, 00:13:43.462 "data_offset": 2048, 00:13:43.462 "data_size": 63488 00:13:43.462 }, 00:13:43.462 { 00:13:43.462 "name": "pt3", 00:13:43.462 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.462 "is_configured": true, 00:13:43.462 "data_offset": 2048, 00:13:43.462 "data_size": 63488 00:13:43.462 }, 00:13:43.462 { 00:13:43.462 "name": null, 00:13:43.462 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:43.462 "is_configured": false, 00:13:43.462 "data_offset": 2048, 00:13:43.462 "data_size": 63488 00:13:43.462 } 00:13:43.462 ] 00:13:43.462 }' 00:13:43.462 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.462 14:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.721 [2024-11-20 14:24:22.673275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:43.721 [2024-11-20 14:24:22.673360] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.721 [2024-11-20 14:24:22.673400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:43.721 [2024-11-20 14:24:22.673426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.721 [2024-11-20 14:24:22.673971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.721 [2024-11-20 14:24:22.674022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:43.721 [2024-11-20 14:24:22.674128] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:43.721 [2024-11-20 14:24:22.674160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:43.721 [2024-11-20 14:24:22.674325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:43.721 [2024-11-20 14:24:22.674346] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:43.721 [2024-11-20 14:24:22.674662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:43.721 [2024-11-20 14:24:22.674851] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:43.721 [2024-11-20 14:24:22.674872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:43.721 [2024-11-20 14:24:22.675056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.721 pt4 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.721 14:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.980 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.980 "name": "raid_bdev1", 00:13:43.980 "uuid": "3ea1bb90-8440-4191-8097-99e6d965aa00", 00:13:43.980 "strip_size_kb": 0, 00:13:43.980 "state": "online", 00:13:43.981 "raid_level": "raid1", 00:13:43.981 "superblock": true, 00:13:43.981 "num_base_bdevs": 4, 00:13:43.981 "num_base_bdevs_discovered": 3, 00:13:43.981 "num_base_bdevs_operational": 3, 00:13:43.981 "base_bdevs_list": [ 00:13:43.981 { 00:13:43.981 "name": null, 00:13:43.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.981 "is_configured": false, 00:13:43.981 
"data_offset": 2048, 00:13:43.981 "data_size": 63488 00:13:43.981 }, 00:13:43.981 { 00:13:43.981 "name": "pt2", 00:13:43.981 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.981 "is_configured": true, 00:13:43.981 "data_offset": 2048, 00:13:43.981 "data_size": 63488 00:13:43.981 }, 00:13:43.981 { 00:13:43.981 "name": "pt3", 00:13:43.981 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.981 "is_configured": true, 00:13:43.981 "data_offset": 2048, 00:13:43.981 "data_size": 63488 00:13:43.981 }, 00:13:43.981 { 00:13:43.981 "name": "pt4", 00:13:43.981 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:43.981 "is_configured": true, 00:13:43.981 "data_offset": 2048, 00:13:43.981 "data_size": 63488 00:13:43.981 } 00:13:43.981 ] 00:13:43.981 }' 00:13:43.981 14:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.981 14:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.240 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:44.240 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.240 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.240 [2024-11-20 14:24:23.177359] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:44.240 [2024-11-20 14:24:23.177529] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.240 [2024-11-20 14:24:23.177737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.240 [2024-11-20 14:24:23.177938] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.240 [2024-11-20 14:24:23.178109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:44.240 14:24:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.240 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.240 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:44.240 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.240 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.240 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.499 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:44.499 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.500 [2024-11-20 14:24:23.253379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:44.500 [2024-11-20 14:24:23.253464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:44.500 [2024-11-20 14:24:23.253493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:44.500 [2024-11-20 14:24:23.253513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.500 [2024-11-20 14:24:23.256393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.500 [2024-11-20 14:24:23.256444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:44.500 [2024-11-20 14:24:23.256553] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:44.500 [2024-11-20 14:24:23.256618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:44.500 [2024-11-20 14:24:23.256792] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:44.500 [2024-11-20 14:24:23.256816] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:44.500 [2024-11-20 14:24:23.256847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:44.500 [2024-11-20 14:24:23.256924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:44.500 [2024-11-20 14:24:23.257099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:44.500 pt1 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.500 "name": "raid_bdev1", 00:13:44.500 "uuid": "3ea1bb90-8440-4191-8097-99e6d965aa00", 00:13:44.500 "strip_size_kb": 0, 00:13:44.500 "state": "configuring", 00:13:44.500 "raid_level": "raid1", 00:13:44.500 "superblock": true, 00:13:44.500 "num_base_bdevs": 4, 00:13:44.500 "num_base_bdevs_discovered": 2, 00:13:44.500 "num_base_bdevs_operational": 3, 00:13:44.500 "base_bdevs_list": [ 00:13:44.500 { 00:13:44.500 "name": null, 00:13:44.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.500 "is_configured": false, 00:13:44.500 "data_offset": 2048, 00:13:44.500 
"data_size": 63488 00:13:44.500 }, 00:13:44.500 { 00:13:44.500 "name": "pt2", 00:13:44.500 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:44.500 "is_configured": true, 00:13:44.500 "data_offset": 2048, 00:13:44.500 "data_size": 63488 00:13:44.500 }, 00:13:44.500 { 00:13:44.500 "name": "pt3", 00:13:44.500 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:44.500 "is_configured": true, 00:13:44.500 "data_offset": 2048, 00:13:44.500 "data_size": 63488 00:13:44.500 }, 00:13:44.500 { 00:13:44.500 "name": null, 00:13:44.500 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:44.500 "is_configured": false, 00:13:44.500 "data_offset": 2048, 00:13:44.500 "data_size": 63488 00:13:44.500 } 00:13:44.500 ] 00:13:44.500 }' 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.500 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.068 [2024-11-20 
14:24:23.853547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:45.068 [2024-11-20 14:24:23.853624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.068 [2024-11-20 14:24:23.853658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:45.068 [2024-11-20 14:24:23.853674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.068 [2024-11-20 14:24:23.854260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.068 [2024-11-20 14:24:23.854285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:45.068 [2024-11-20 14:24:23.854387] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:45.068 [2024-11-20 14:24:23.854418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:45.068 [2024-11-20 14:24:23.854583] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:45.068 [2024-11-20 14:24:23.854599] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:45.068 [2024-11-20 14:24:23.854915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:45.068 [2024-11-20 14:24:23.855114] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:45.068 [2024-11-20 14:24:23.855134] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:45.068 [2024-11-20 14:24:23.855334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.068 pt4 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:45.068 14:24:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.068 "name": "raid_bdev1", 00:13:45.068 "uuid": "3ea1bb90-8440-4191-8097-99e6d965aa00", 00:13:45.068 "strip_size_kb": 0, 00:13:45.068 "state": "online", 00:13:45.068 "raid_level": "raid1", 00:13:45.068 "superblock": true, 00:13:45.068 "num_base_bdevs": 4, 00:13:45.068 "num_base_bdevs_discovered": 3, 00:13:45.068 "num_base_bdevs_operational": 3, 00:13:45.068 "base_bdevs_list": [ 00:13:45.068 { 
00:13:45.068 "name": null, 00:13:45.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.068 "is_configured": false, 00:13:45.068 "data_offset": 2048, 00:13:45.068 "data_size": 63488 00:13:45.068 }, 00:13:45.068 { 00:13:45.068 "name": "pt2", 00:13:45.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:45.068 "is_configured": true, 00:13:45.068 "data_offset": 2048, 00:13:45.068 "data_size": 63488 00:13:45.068 }, 00:13:45.068 { 00:13:45.068 "name": "pt3", 00:13:45.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:45.068 "is_configured": true, 00:13:45.068 "data_offset": 2048, 00:13:45.068 "data_size": 63488 00:13:45.068 }, 00:13:45.068 { 00:13:45.068 "name": "pt4", 00:13:45.068 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:45.068 "is_configured": true, 00:13:45.068 "data_offset": 2048, 00:13:45.068 "data_size": 63488 00:13:45.068 } 00:13:45.068 ] 00:13:45.068 }' 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.068 14:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.637 
14:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.637 [2024-11-20 14:24:24.430080] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3ea1bb90-8440-4191-8097-99e6d965aa00 '!=' 3ea1bb90-8440-4191-8097-99e6d965aa00 ']' 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74693 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74693 ']' 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74693 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74693 00:13:45.637 killing process with pid 74693 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74693' 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74693 00:13:45.637 [2024-11-20 14:24:24.508523] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:45.637 14:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74693 00:13:45.637 [2024-11-20 14:24:24.508646] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:45.637 [2024-11-20 14:24:24.508743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:45.637 [2024-11-20 14:24:24.508762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:45.896 [2024-11-20 14:24:24.867123] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:47.272 ************************************ 00:13:47.272 END TEST raid_superblock_test 00:13:47.272 ************************************ 00:13:47.272 14:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:47.272 00:13:47.272 real 0m9.447s 00:13:47.272 user 0m15.542s 00:13:47.272 sys 0m1.348s 00:13:47.272 14:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.272 14:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.272 14:24:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:13:47.272 14:24:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:47.272 14:24:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.272 14:24:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:47.272 ************************************ 00:13:47.272 START TEST raid_read_error_test 00:13:47.272 ************************************ 00:13:47.272 14:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:13:47.272 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:47.272 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:47.272 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:47.272 14:24:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pQP8ObBq8B 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75190 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75190 00:13:47.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75190 ']' 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.273 14:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.273 [2024-11-20 14:24:26.078252] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:13:47.273 [2024-11-20 14:24:26.078447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75190 ] 00:13:47.531 [2024-11-20 14:24:26.259737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.531 [2024-11-20 14:24:26.382792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.790 [2024-11-20 14:24:26.579724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.790 [2024-11-20 14:24:26.580007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:48.368 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.368 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:48.368 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:48.368 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:48.368 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.368 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.368 BaseBdev1_malloc 00:13:48.368 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.368 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.369 true 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.369 [2024-11-20 14:24:27.122114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:48.369 [2024-11-20 14:24:27.122178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.369 [2024-11-20 14:24:27.122206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:48.369 [2024-11-20 14:24:27.122224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.369 [2024-11-20 14:24:27.124939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.369 [2024-11-20 14:24:27.125035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:48.369 BaseBdev1 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.369 BaseBdev2_malloc 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.369 true 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.369 [2024-11-20 14:24:27.177628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:48.369 [2024-11-20 14:24:27.177693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.369 [2024-11-20 14:24:27.177717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:48.369 [2024-11-20 14:24:27.177734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.369 [2024-11-20 14:24:27.180670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.369 [2024-11-20 14:24:27.180749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:48.369 BaseBdev2 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.369 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.369 BaseBdev3_malloc 00:13:48.370 14:24:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.370 true 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.370 [2024-11-20 14:24:27.247528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:48.370 [2024-11-20 14:24:27.247595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.370 [2024-11-20 14:24:27.247621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:48.370 [2024-11-20 14:24:27.247638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.370 [2024-11-20 14:24:27.250457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.370 [2024-11-20 14:24:27.250519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:48.370 BaseBdev3 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.370 BaseBdev4_malloc 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.370 true 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.370 [2024-11-20 14:24:27.305591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:48.370 [2024-11-20 14:24:27.305668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.370 [2024-11-20 14:24:27.305693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:48.370 [2024-11-20 14:24:27.305708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.370 [2024-11-20 14:24:27.308504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.370 [2024-11-20 14:24:27.308569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:48.370 BaseBdev4 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.370 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.370 [2024-11-20 14:24:27.313651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.370 [2024-11-20 14:24:27.316218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:48.370 [2024-11-20 14:24:27.316364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:48.370 [2024-11-20 14:24:27.316457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:48.370 [2024-11-20 14:24:27.316741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:48.371 [2024-11-20 14:24:27.316762] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:48.371 [2024-11-20 14:24:27.317128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:48.371 [2024-11-20 14:24:27.317411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:48.371 [2024-11-20 14:24:27.317432] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:48.371 [2024-11-20 14:24:27.317670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.371 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.371 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:48.371 14:24:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.371 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.371 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.371 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.371 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.372 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.372 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.372 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.372 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.372 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.372 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.372 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.372 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.372 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.630 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.630 "name": "raid_bdev1", 00:13:48.630 "uuid": "06f6e316-b971-4448-892c-2139f4c19338", 00:13:48.630 "strip_size_kb": 0, 00:13:48.630 "state": "online", 00:13:48.630 "raid_level": "raid1", 00:13:48.630 "superblock": true, 00:13:48.630 "num_base_bdevs": 4, 00:13:48.630 "num_base_bdevs_discovered": 4, 00:13:48.630 "num_base_bdevs_operational": 4, 00:13:48.630 "base_bdevs_list": [ 00:13:48.630 { 
00:13:48.630 "name": "BaseBdev1", 00:13:48.630 "uuid": "8778efc0-2789-59da-b099-ec47e8591010", 00:13:48.630 "is_configured": true, 00:13:48.630 "data_offset": 2048, 00:13:48.630 "data_size": 63488 00:13:48.630 }, 00:13:48.630 { 00:13:48.630 "name": "BaseBdev2", 00:13:48.630 "uuid": "b27284ab-d74f-5d24-8d29-254f48b7cef9", 00:13:48.630 "is_configured": true, 00:13:48.630 "data_offset": 2048, 00:13:48.630 "data_size": 63488 00:13:48.630 }, 00:13:48.630 { 00:13:48.630 "name": "BaseBdev3", 00:13:48.630 "uuid": "05a85744-0d68-5bbc-8aca-2ece445086b2", 00:13:48.630 "is_configured": true, 00:13:48.630 "data_offset": 2048, 00:13:48.630 "data_size": 63488 00:13:48.630 }, 00:13:48.630 { 00:13:48.630 "name": "BaseBdev4", 00:13:48.630 "uuid": "cc997e2d-2688-5ce0-85e3-07d733dd9283", 00:13:48.630 "is_configured": true, 00:13:48.630 "data_offset": 2048, 00:13:48.630 "data_size": 63488 00:13:48.630 } 00:13:48.630 ] 00:13:48.630 }' 00:13:48.630 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.630 14:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.888 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:48.888 14:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:49.146 [2024-11-20 14:24:27.963281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.130 14:24:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.130 14:24:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.130 "name": "raid_bdev1", 00:13:50.130 "uuid": "06f6e316-b971-4448-892c-2139f4c19338", 00:13:50.130 "strip_size_kb": 0, 00:13:50.130 "state": "online", 00:13:50.130 "raid_level": "raid1", 00:13:50.130 "superblock": true, 00:13:50.130 "num_base_bdevs": 4, 00:13:50.130 "num_base_bdevs_discovered": 4, 00:13:50.130 "num_base_bdevs_operational": 4, 00:13:50.130 "base_bdevs_list": [ 00:13:50.130 { 00:13:50.130 "name": "BaseBdev1", 00:13:50.130 "uuid": "8778efc0-2789-59da-b099-ec47e8591010", 00:13:50.130 "is_configured": true, 00:13:50.130 "data_offset": 2048, 00:13:50.130 "data_size": 63488 00:13:50.130 }, 00:13:50.130 { 00:13:50.130 "name": "BaseBdev2", 00:13:50.130 "uuid": "b27284ab-d74f-5d24-8d29-254f48b7cef9", 00:13:50.130 "is_configured": true, 00:13:50.130 "data_offset": 2048, 00:13:50.130 "data_size": 63488 00:13:50.130 }, 00:13:50.130 { 00:13:50.130 "name": "BaseBdev3", 00:13:50.130 "uuid": "05a85744-0d68-5bbc-8aca-2ece445086b2", 00:13:50.130 "is_configured": true, 00:13:50.130 "data_offset": 2048, 00:13:50.130 "data_size": 63488 00:13:50.130 }, 00:13:50.130 { 00:13:50.130 "name": "BaseBdev4", 00:13:50.130 "uuid": "cc997e2d-2688-5ce0-85e3-07d733dd9283", 00:13:50.130 "is_configured": true, 00:13:50.130 "data_offset": 2048, 00:13:50.130 "data_size": 63488 00:13:50.130 } 00:13:50.130 ] 00:13:50.130 }' 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.130 14:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.393 14:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:50.393 14:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.393 14:24:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:50.652 [2024-11-20 14:24:29.377430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:50.652 [2024-11-20 14:24:29.377601] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.652 [2024-11-20 14:24:29.381144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.652 [2024-11-20 14:24:29.381220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.652 [2024-11-20 14:24:29.381436] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.652 [2024-11-20 14:24:29.381459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:50.652 { 00:13:50.652 "results": [ 00:13:50.652 { 00:13:50.652 "job": "raid_bdev1", 00:13:50.652 "core_mask": "0x1", 00:13:50.652 "workload": "randrw", 00:13:50.652 "percentage": 50, 00:13:50.652 "status": "finished", 00:13:50.652 "queue_depth": 1, 00:13:50.652 "io_size": 131072, 00:13:50.652 "runtime": 1.411667, 00:13:50.652 "iops": 7673.197715891921, 00:13:50.652 "mibps": 959.1497144864901, 00:13:50.652 "io_failed": 0, 00:13:50.652 "io_timeout": 0, 00:13:50.652 "avg_latency_us": 126.14912850812408, 00:13:50.652 "min_latency_us": 39.79636363636364, 00:13:50.652 "max_latency_us": 2010.7636363636364 00:13:50.652 } 00:13:50.652 ], 00:13:50.652 "core_count": 1 00:13:50.652 } 00:13:50.652 14:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.652 14:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75190 00:13:50.652 14:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75190 ']' 00:13:50.652 14:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75190 00:13:50.652 14:24:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:50.652 14:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:50.652 14:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75190 00:13:50.652 killing process with pid 75190 00:13:50.652 14:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:50.652 14:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:50.652 14:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75190' 00:13:50.652 14:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75190 00:13:50.652 [2024-11-20 14:24:29.421031] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.652 14:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75190 00:13:50.912 [2024-11-20 14:24:29.718865] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:52.288 14:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pQP8ObBq8B 00:13:52.288 14:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:52.288 14:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:52.288 14:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:52.288 14:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:52.288 ************************************ 00:13:52.288 END TEST raid_read_error_test 00:13:52.288 ************************************ 00:13:52.288 14:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:52.288 14:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:52.288 14:24:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:52.288 00:13:52.288 real 0m4.902s 00:13:52.288 user 0m6.041s 00:13:52.288 sys 0m0.597s 00:13:52.288 14:24:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.288 14:24:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.288 14:24:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:13:52.288 14:24:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:52.288 14:24:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.288 14:24:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:52.288 ************************************ 00:13:52.288 START TEST raid_write_error_test 00:13:52.288 ************************************ 00:13:52.288 14:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:13:52.288 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:52.288 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:52.288 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:52.288 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:52.288 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:52.288 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:52.288 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:52.288 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:52.288 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:52.288 14:24:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:52.288 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:52.288 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:52.288 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:52.288 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:52.288 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:52.288 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Kd4jOftT48 00:13:52.289 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75337 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75337 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75337 ']' 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.289 14:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.289 [2024-11-20 14:24:31.043741] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:13:52.289 [2024-11-20 14:24:31.043924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75337 ] 00:13:52.289 [2024-11-20 14:24:31.226527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.547 [2024-11-20 14:24:31.360652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.806 [2024-11-20 14:24:31.568602] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.806 [2024-11-20 14:24:31.568649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.065 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.065 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:53.065 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:53.065 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:53.065 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.065 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.325 BaseBdev1_malloc 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.325 true 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.325 [2024-11-20 14:24:32.075841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:53.325 [2024-11-20 14:24:32.075913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.325 [2024-11-20 14:24:32.075944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:53.325 [2024-11-20 14:24:32.075962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.325 [2024-11-20 14:24:32.078954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.325 [2024-11-20 14:24:32.079009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:53.325 BaseBdev1 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.325 BaseBdev2_malloc 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:53.325 14:24:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.325 true 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.325 [2024-11-20 14:24:32.133648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:53.325 [2024-11-20 14:24:32.133715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.325 [2024-11-20 14:24:32.133742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:53.325 [2024-11-20 14:24:32.133760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.325 [2024-11-20 14:24:32.136579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.325 [2024-11-20 14:24:32.136628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:53.325 BaseBdev2 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:53.325 BaseBdev3_malloc 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.325 true 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.325 [2024-11-20 14:24:32.197902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:53.325 [2024-11-20 14:24:32.197967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.325 [2024-11-20 14:24:32.198008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:53.325 [2024-11-20 14:24:32.198028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.325 [2024-11-20 14:24:32.200983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.325 [2024-11-20 14:24:32.201223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:53.325 BaseBdev3 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:53.325 14:24:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.326 BaseBdev4_malloc 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.326 true 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.326 [2024-11-20 14:24:32.253764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:53.326 [2024-11-20 14:24:32.253829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.326 [2024-11-20 14:24:32.253856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:53.326 [2024-11-20 14:24:32.253874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.326 [2024-11-20 14:24:32.256678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.326 [2024-11-20 14:24:32.256755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:53.326 BaseBdev4 
00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.326 [2024-11-20 14:24:32.261815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:53.326 [2024-11-20 14:24:32.264322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:53.326 [2024-11-20 14:24:32.264427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:53.326 [2024-11-20 14:24:32.264524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:53.326 [2024-11-20 14:24:32.264835] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:53.326 [2024-11-20 14:24:32.264861] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:53.326 [2024-11-20 14:24:32.265189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:53.326 [2024-11-20 14:24:32.265406] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:53.326 [2024-11-20 14:24:32.265429] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:53.326 [2024-11-20 14:24:32.265618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.326 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.586 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.586 "name": "raid_bdev1", 00:13:53.586 "uuid": "0b717286-02f0-4885-ae15-b757bf3548ef", 00:13:53.586 "strip_size_kb": 0, 00:13:53.586 "state": "online", 00:13:53.586 "raid_level": "raid1", 00:13:53.586 "superblock": true, 00:13:53.586 "num_base_bdevs": 4, 00:13:53.586 "num_base_bdevs_discovered": 4, 00:13:53.586 
"num_base_bdevs_operational": 4, 00:13:53.586 "base_bdevs_list": [ 00:13:53.586 { 00:13:53.586 "name": "BaseBdev1", 00:13:53.586 "uuid": "21adde86-bc7a-5ac8-8256-c4732cb68c01", 00:13:53.586 "is_configured": true, 00:13:53.586 "data_offset": 2048, 00:13:53.586 "data_size": 63488 00:13:53.586 }, 00:13:53.586 { 00:13:53.586 "name": "BaseBdev2", 00:13:53.586 "uuid": "32ed7000-e864-52b6-bb60-dae6b541279a", 00:13:53.586 "is_configured": true, 00:13:53.586 "data_offset": 2048, 00:13:53.586 "data_size": 63488 00:13:53.586 }, 00:13:53.586 { 00:13:53.586 "name": "BaseBdev3", 00:13:53.586 "uuid": "8626872a-b347-569e-9080-d29fcab72956", 00:13:53.586 "is_configured": true, 00:13:53.586 "data_offset": 2048, 00:13:53.586 "data_size": 63488 00:13:53.586 }, 00:13:53.586 { 00:13:53.586 "name": "BaseBdev4", 00:13:53.586 "uuid": "c28d244f-f27d-5720-8681-5a21a8244756", 00:13:53.586 "is_configured": true, 00:13:53.586 "data_offset": 2048, 00:13:53.586 "data_size": 63488 00:13:53.586 } 00:13:53.586 ] 00:13:53.586 }' 00:13:53.586 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.586 14:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.844 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:53.844 14:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:54.104 [2024-11-20 14:24:32.915513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.043 [2024-11-20 14:24:33.799815] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:55.043 [2024-11-20 14:24:33.800037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:55.043 [2024-11-20 14:24:33.800332] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.043 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.044 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.044 14:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.044 14:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.044 14:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.044 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.044 "name": "raid_bdev1", 00:13:55.044 "uuid": "0b717286-02f0-4885-ae15-b757bf3548ef", 00:13:55.044 "strip_size_kb": 0, 00:13:55.044 "state": "online", 00:13:55.044 "raid_level": "raid1", 00:13:55.044 "superblock": true, 00:13:55.044 "num_base_bdevs": 4, 00:13:55.044 "num_base_bdevs_discovered": 3, 00:13:55.044 "num_base_bdevs_operational": 3, 00:13:55.044 "base_bdevs_list": [ 00:13:55.044 { 00:13:55.044 "name": null, 00:13:55.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.044 "is_configured": false, 00:13:55.044 "data_offset": 0, 00:13:55.044 "data_size": 63488 00:13:55.044 }, 00:13:55.044 { 00:13:55.044 "name": "BaseBdev2", 00:13:55.044 "uuid": "32ed7000-e864-52b6-bb60-dae6b541279a", 00:13:55.044 "is_configured": true, 00:13:55.044 "data_offset": 2048, 00:13:55.044 "data_size": 63488 00:13:55.044 }, 00:13:55.044 { 00:13:55.044 "name": "BaseBdev3", 00:13:55.044 "uuid": "8626872a-b347-569e-9080-d29fcab72956", 00:13:55.044 "is_configured": true, 00:13:55.044 "data_offset": 2048, 00:13:55.044 "data_size": 63488 00:13:55.044 }, 00:13:55.044 { 00:13:55.044 "name": "BaseBdev4", 00:13:55.044 "uuid": "c28d244f-f27d-5720-8681-5a21a8244756", 00:13:55.044 "is_configured": true, 00:13:55.044 "data_offset": 2048, 00:13:55.044 "data_size": 63488 00:13:55.044 } 00:13:55.044 ] 
00:13:55.044 }' 00:13:55.044 14:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.044 14:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.612 14:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:55.612 14:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.612 14:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.612 [2024-11-20 14:24:34.364568] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:55.612 [2024-11-20 14:24:34.364762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.612 [2024-11-20 14:24:34.368307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.612 [2024-11-20 14:24:34.368557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.612 [2024-11-20 14:24:34.368801] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:55.612 [2024-11-20 14:24:34.368943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, sta{ 00:13:55.612 "results": [ 00:13:55.612 { 00:13:55.612 "job": "raid_bdev1", 00:13:55.612 "core_mask": "0x1", 00:13:55.612 "workload": "randrw", 00:13:55.612 "percentage": 50, 00:13:55.612 "status": "finished", 00:13:55.612 "queue_depth": 1, 00:13:55.612 "io_size": 131072, 00:13:55.612 "runtime": 1.446701, 00:13:55.612 "iops": 8402.565561232072, 00:13:55.612 "mibps": 1050.320695154009, 00:13:55.612 "io_failed": 0, 00:13:55.612 "io_timeout": 0, 00:13:55.612 "avg_latency_us": 114.67756977474649, 00:13:55.612 "min_latency_us": 39.56363636363636, 00:13:55.612 "max_latency_us": 1936.290909090909 00:13:55.612 } 00:13:55.612 ], 00:13:55.612 "core_count": 1 00:13:55.612 } 00:13:55.612 
te offline 00:13:55.612 14:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.612 14:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75337 00:13:55.612 14:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75337 ']' 00:13:55.612 14:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75337 00:13:55.612 14:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:55.612 14:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.612 14:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75337 00:13:55.612 killing process with pid 75337 00:13:55.612 14:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:55.612 14:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:55.612 14:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75337' 00:13:55.612 14:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75337 00:13:55.612 [2024-11-20 14:24:34.408203] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:55.612 14:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75337 00:13:55.870 [2024-11-20 14:24:34.693429] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:56.803 14:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Kd4jOftT48 00:13:56.803 14:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:56.803 14:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:56.803 ************************************ 00:13:56.803 END TEST 
raid_write_error_test 00:13:56.803 ************************************ 00:13:56.803 14:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:56.803 14:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:56.803 14:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:56.803 14:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:56.803 14:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:56.803 00:13:56.803 real 0m4.855s 00:13:56.803 user 0m6.026s 00:13:56.803 sys 0m0.605s 00:13:56.803 14:24:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.803 14:24:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.061 14:24:35 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:13:57.061 14:24:35 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:57.061 14:24:35 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:13:57.061 14:24:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:57.061 14:24:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.061 14:24:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:57.061 ************************************ 00:13:57.061 START TEST raid_rebuild_test 00:13:57.061 ************************************ 00:13:57.061 14:24:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 
00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true 
']' 00:13:57.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75481 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75481 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75481 ']' 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.062 14:24:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.062 [2024-11-20 14:24:35.923326] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:13:57.062 [2024-11-20 14:24:35.923682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:57.062 Zero copy mechanism will not be used. 
00:13:57.062 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75481 ] 00:13:57.320 [2024-11-20 14:24:36.095178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.320 [2024-11-20 14:24:36.219524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.584 [2024-11-20 14:24:36.414623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.584 [2024-11-20 14:24:36.414908] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.181 BaseBdev1_malloc 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.181 [2024-11-20 14:24:36.944977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:58.181 [2024-11-20 14:24:36.945077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.181 [2024-11-20 
14:24:36.945107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:58.181 [2024-11-20 14:24:36.945124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.181 [2024-11-20 14:24:36.947736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.181 [2024-11-20 14:24:36.947943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:58.181 BaseBdev1 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.181 BaseBdev2_malloc 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.181 14:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.181 [2024-11-20 14:24:36.993134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:58.181 [2024-11-20 14:24:36.993198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.181 [2024-11-20 14:24:36.993228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:58.181 [2024-11-20 14:24:36.993258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:58.181 [2024-11-20 14:24:36.995822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.182 [2024-11-20 14:24:36.996042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:58.182 BaseBdev2 00:13:58.182 14:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.182 14:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:58.182 14:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.182 14:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.182 spare_malloc 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.182 spare_delay 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.182 [2024-11-20 14:24:37.054075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:58.182 [2024-11-20 14:24:37.054137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.182 [2024-11-20 14:24:37.054163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:13:58.182 [2024-11-20 14:24:37.054179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.182 [2024-11-20 14:24:37.056842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.182 [2024-11-20 14:24:37.056920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:58.182 spare 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.182 [2024-11-20 14:24:37.062123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:58.182 [2024-11-20 14:24:37.064586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:58.182 [2024-11-20 14:24:37.064693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:58.182 [2024-11-20 14:24:37.064712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:58.182 [2024-11-20 14:24:37.064990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:58.182 [2024-11-20 14:24:37.065214] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:58.182 [2024-11-20 14:24:37.065232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:58.182 [2024-11-20 14:24:37.065450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.182 
14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.182 "name": "raid_bdev1", 00:13:58.182 "uuid": "9d738984-1515-4349-b631-3d2d943bd400", 00:13:58.182 "strip_size_kb": 0, 00:13:58.182 "state": "online", 00:13:58.182 "raid_level": "raid1", 00:13:58.182 "superblock": false, 00:13:58.182 "num_base_bdevs": 2, 00:13:58.182 "num_base_bdevs_discovered": 
2, 00:13:58.182 "num_base_bdevs_operational": 2, 00:13:58.182 "base_bdevs_list": [ 00:13:58.182 { 00:13:58.182 "name": "BaseBdev1", 00:13:58.182 "uuid": "7a810feb-bbd4-5c36-97a9-f5a452ed36e4", 00:13:58.182 "is_configured": true, 00:13:58.182 "data_offset": 0, 00:13:58.182 "data_size": 65536 00:13:58.182 }, 00:13:58.182 { 00:13:58.182 "name": "BaseBdev2", 00:13:58.182 "uuid": "7bbc18bf-af19-5cbf-a672-c1b2a6eb0062", 00:13:58.182 "is_configured": true, 00:13:58.182 "data_offset": 0, 00:13:58.182 "data_size": 65536 00:13:58.182 } 00:13:58.182 ] 00:13:58.182 }' 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.182 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.750 [2024-11-20 14:24:37.575256] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:58.750 14:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:59.008 [2024-11-20 14:24:37.959075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:59.008 /dev/nbd0 00:13:59.267 14:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.267 1+0 records in 00:13:59.267 1+0 records out 00:13:59.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358087 s, 11.4 MB/s 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:13:59.267 14:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:05.830 65536+0 records in 00:14:05.830 65536+0 records out 00:14:05.830 33554432 bytes (34 MB, 32 MiB) copied, 6.46858 s, 5.2 MB/s 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:05.830 [2024-11-20 14:24:44.784533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:05.830 
14:24:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.830 [2024-11-20 14:24:44.803133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.830 14:24:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.090 14:24:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.090 14:24:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.090 14:24:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.090 14:24:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:06.090 14:24:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.090 14:24:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.090 "name": "raid_bdev1", 00:14:06.090 "uuid": "9d738984-1515-4349-b631-3d2d943bd400", 00:14:06.090 "strip_size_kb": 0, 00:14:06.090 "state": "online", 00:14:06.090 "raid_level": "raid1", 00:14:06.090 "superblock": false, 00:14:06.090 "num_base_bdevs": 2, 00:14:06.090 "num_base_bdevs_discovered": 1, 00:14:06.090 "num_base_bdevs_operational": 1, 00:14:06.090 "base_bdevs_list": [ 00:14:06.090 { 00:14:06.090 "name": null, 00:14:06.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.090 "is_configured": false, 00:14:06.090 "data_offset": 0, 00:14:06.090 "data_size": 65536 00:14:06.090 }, 00:14:06.090 { 00:14:06.090 "name": "BaseBdev2", 00:14:06.090 "uuid": "7bbc18bf-af19-5cbf-a672-c1b2a6eb0062", 00:14:06.090 "is_configured": true, 00:14:06.090 "data_offset": 0, 00:14:06.090 "data_size": 65536 00:14:06.090 } 00:14:06.090 ] 00:14:06.090 }' 00:14:06.090 14:24:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.090 14:24:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.349 14:24:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:06.349 14:24:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.349 14:24:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.349 [2024-11-20 14:24:45.271296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:06.349 [2024-11-20 14:24:45.287837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:14:06.349 14:24:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.349 14:24:45 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:06.349 [2024-11-20 14:24:45.290476] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:07.385 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.385 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.385 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.385 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.385 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.385 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.385 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.385 14:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.385 14:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.385 14:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.689 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.689 "name": "raid_bdev1", 00:14:07.689 "uuid": "9d738984-1515-4349-b631-3d2d943bd400", 00:14:07.689 "strip_size_kb": 0, 00:14:07.689 "state": "online", 00:14:07.689 "raid_level": "raid1", 00:14:07.689 "superblock": false, 00:14:07.689 "num_base_bdevs": 2, 00:14:07.689 "num_base_bdevs_discovered": 2, 00:14:07.689 "num_base_bdevs_operational": 2, 00:14:07.689 "process": { 00:14:07.689 "type": "rebuild", 00:14:07.689 "target": "spare", 00:14:07.689 "progress": { 00:14:07.689 "blocks": 20480, 00:14:07.689 "percent": 31 00:14:07.689 } 00:14:07.689 }, 00:14:07.689 "base_bdevs_list": [ 00:14:07.689 { 
00:14:07.689 "name": "spare", 00:14:07.689 "uuid": "c342e328-a37c-5240-a5c5-f39e4439e01a", 00:14:07.689 "is_configured": true, 00:14:07.689 "data_offset": 0, 00:14:07.689 "data_size": 65536 00:14:07.689 }, 00:14:07.689 { 00:14:07.689 "name": "BaseBdev2", 00:14:07.689 "uuid": "7bbc18bf-af19-5cbf-a672-c1b2a6eb0062", 00:14:07.689 "is_configured": true, 00:14:07.689 "data_offset": 0, 00:14:07.689 "data_size": 65536 00:14:07.689 } 00:14:07.689 ] 00:14:07.689 }' 00:14:07.689 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.689 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.689 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.689 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.689 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:07.689 14:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.689 14:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.689 [2024-11-20 14:24:46.460113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:07.690 [2024-11-20 14:24:46.499563] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:07.690 [2024-11-20 14:24:46.499663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.690 [2024-11-20 14:24:46.499688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:07.690 [2024-11-20 14:24:46.499703] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.690 14:24:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.690 "name": "raid_bdev1", 00:14:07.690 "uuid": "9d738984-1515-4349-b631-3d2d943bd400", 00:14:07.690 "strip_size_kb": 0, 00:14:07.690 "state": "online", 00:14:07.690 "raid_level": "raid1", 00:14:07.690 "superblock": false, 00:14:07.690 "num_base_bdevs": 2, 00:14:07.690 "num_base_bdevs_discovered": 1, 
00:14:07.690 "num_base_bdevs_operational": 1, 00:14:07.690 "base_bdevs_list": [ 00:14:07.690 { 00:14:07.690 "name": null, 00:14:07.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.690 "is_configured": false, 00:14:07.690 "data_offset": 0, 00:14:07.690 "data_size": 65536 00:14:07.690 }, 00:14:07.690 { 00:14:07.690 "name": "BaseBdev2", 00:14:07.690 "uuid": "7bbc18bf-af19-5cbf-a672-c1b2a6eb0062", 00:14:07.690 "is_configured": true, 00:14:07.690 "data_offset": 0, 00:14:07.690 "data_size": 65536 00:14:07.690 } 00:14:07.690 ] 00:14:07.690 }' 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.690 14:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.257 14:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:08.257 14:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.257 14:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.257 14:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.257 14:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.257 14:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.257 14:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.257 14:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.257 14:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.257 14:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.257 14:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.257 "name": "raid_bdev1", 00:14:08.257 "uuid": 
"9d738984-1515-4349-b631-3d2d943bd400", 00:14:08.257 "strip_size_kb": 0, 00:14:08.257 "state": "online", 00:14:08.257 "raid_level": "raid1", 00:14:08.257 "superblock": false, 00:14:08.257 "num_base_bdevs": 2, 00:14:08.257 "num_base_bdevs_discovered": 1, 00:14:08.257 "num_base_bdevs_operational": 1, 00:14:08.257 "base_bdevs_list": [ 00:14:08.258 { 00:14:08.258 "name": null, 00:14:08.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.258 "is_configured": false, 00:14:08.258 "data_offset": 0, 00:14:08.258 "data_size": 65536 00:14:08.258 }, 00:14:08.258 { 00:14:08.258 "name": "BaseBdev2", 00:14:08.258 "uuid": "7bbc18bf-af19-5cbf-a672-c1b2a6eb0062", 00:14:08.258 "is_configured": true, 00:14:08.258 "data_offset": 0, 00:14:08.258 "data_size": 65536 00:14:08.258 } 00:14:08.258 ] 00:14:08.258 }' 00:14:08.258 14:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.258 14:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.258 14:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.258 14:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.258 14:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:08.258 14:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.258 14:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.258 [2024-11-20 14:24:47.180151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.258 [2024-11-20 14:24:47.196387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:14:08.258 14:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.258 14:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:14:08.258 [2024-11-20 14:24:47.198883] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:09.634 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.634 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.634 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.634 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.634 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.634 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.634 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.634 14:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.634 14:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.634 14:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.634 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.634 "name": "raid_bdev1", 00:14:09.634 "uuid": "9d738984-1515-4349-b631-3d2d943bd400", 00:14:09.634 "strip_size_kb": 0, 00:14:09.634 "state": "online", 00:14:09.634 "raid_level": "raid1", 00:14:09.634 "superblock": false, 00:14:09.634 "num_base_bdevs": 2, 00:14:09.634 "num_base_bdevs_discovered": 2, 00:14:09.634 "num_base_bdevs_operational": 2, 00:14:09.634 "process": { 00:14:09.634 "type": "rebuild", 00:14:09.634 "target": "spare", 00:14:09.634 "progress": { 00:14:09.634 "blocks": 20480, 00:14:09.634 "percent": 31 00:14:09.634 } 00:14:09.634 }, 00:14:09.634 "base_bdevs_list": [ 00:14:09.634 { 00:14:09.634 "name": "spare", 00:14:09.634 "uuid": 
"c342e328-a37c-5240-a5c5-f39e4439e01a", 00:14:09.634 "is_configured": true, 00:14:09.634 "data_offset": 0, 00:14:09.634 "data_size": 65536 00:14:09.634 }, 00:14:09.634 { 00:14:09.634 "name": "BaseBdev2", 00:14:09.634 "uuid": "7bbc18bf-af19-5cbf-a672-c1b2a6eb0062", 00:14:09.634 "is_configured": true, 00:14:09.634 "data_offset": 0, 00:14:09.634 "data_size": 65536 00:14:09.634 } 00:14:09.634 ] 00:14:09.634 }' 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=395 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.635 "name": "raid_bdev1", 00:14:09.635 "uuid": "9d738984-1515-4349-b631-3d2d943bd400", 00:14:09.635 "strip_size_kb": 0, 00:14:09.635 "state": "online", 00:14:09.635 "raid_level": "raid1", 00:14:09.635 "superblock": false, 00:14:09.635 "num_base_bdevs": 2, 00:14:09.635 "num_base_bdevs_discovered": 2, 00:14:09.635 "num_base_bdevs_operational": 2, 00:14:09.635 "process": { 00:14:09.635 "type": "rebuild", 00:14:09.635 "target": "spare", 00:14:09.635 "progress": { 00:14:09.635 "blocks": 22528, 00:14:09.635 "percent": 34 00:14:09.635 } 00:14:09.635 }, 00:14:09.635 "base_bdevs_list": [ 00:14:09.635 { 00:14:09.635 "name": "spare", 00:14:09.635 "uuid": "c342e328-a37c-5240-a5c5-f39e4439e01a", 00:14:09.635 "is_configured": true, 00:14:09.635 "data_offset": 0, 00:14:09.635 "data_size": 65536 00:14:09.635 }, 00:14:09.635 { 00:14:09.635 "name": "BaseBdev2", 00:14:09.635 "uuid": "7bbc18bf-af19-5cbf-a672-c1b2a6eb0062", 00:14:09.635 "is_configured": true, 00:14:09.635 "data_offset": 0, 00:14:09.635 "data_size": 65536 00:14:09.635 } 00:14:09.635 ] 00:14:09.635 }' 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.635 14:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:10.572 14:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:10.573 14:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.573 14:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.573 14:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.573 14:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.573 14:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.573 14:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.573 14:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.573 14:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.573 14:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.573 14:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.830 14:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.830 "name": "raid_bdev1", 00:14:10.830 "uuid": "9d738984-1515-4349-b631-3d2d943bd400", 00:14:10.830 "strip_size_kb": 0, 00:14:10.830 "state": "online", 00:14:10.830 "raid_level": "raid1", 00:14:10.830 "superblock": false, 00:14:10.830 "num_base_bdevs": 2, 00:14:10.830 "num_base_bdevs_discovered": 2, 00:14:10.830 "num_base_bdevs_operational": 2, 00:14:10.830 "process": { 00:14:10.830 "type": "rebuild", 00:14:10.830 "target": "spare", 
00:14:10.830 "progress": { 00:14:10.830 "blocks": 47104, 00:14:10.830 "percent": 71 00:14:10.830 } 00:14:10.830 }, 00:14:10.830 "base_bdevs_list": [ 00:14:10.830 { 00:14:10.830 "name": "spare", 00:14:10.830 "uuid": "c342e328-a37c-5240-a5c5-f39e4439e01a", 00:14:10.830 "is_configured": true, 00:14:10.830 "data_offset": 0, 00:14:10.830 "data_size": 65536 00:14:10.830 }, 00:14:10.830 { 00:14:10.830 "name": "BaseBdev2", 00:14:10.830 "uuid": "7bbc18bf-af19-5cbf-a672-c1b2a6eb0062", 00:14:10.830 "is_configured": true, 00:14:10.830 "data_offset": 0, 00:14:10.830 "data_size": 65536 00:14:10.830 } 00:14:10.830 ] 00:14:10.830 }' 00:14:10.830 14:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.830 14:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:10.830 14:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.830 14:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.830 14:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:11.765 [2024-11-20 14:24:50.422410] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:11.765 [2024-11-20 14:24:50.422510] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:11.765 [2024-11-20 14:24:50.422592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.765 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:11.765 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.766 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.766 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:11.766 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.766 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.766 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.766 14:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.766 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.766 14:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.766 14:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.766 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.766 "name": "raid_bdev1", 00:14:11.766 "uuid": "9d738984-1515-4349-b631-3d2d943bd400", 00:14:11.766 "strip_size_kb": 0, 00:14:11.766 "state": "online", 00:14:11.766 "raid_level": "raid1", 00:14:11.766 "superblock": false, 00:14:11.766 "num_base_bdevs": 2, 00:14:11.766 "num_base_bdevs_discovered": 2, 00:14:11.766 "num_base_bdevs_operational": 2, 00:14:11.766 "base_bdevs_list": [ 00:14:11.766 { 00:14:11.766 "name": "spare", 00:14:11.766 "uuid": "c342e328-a37c-5240-a5c5-f39e4439e01a", 00:14:11.766 "is_configured": true, 00:14:11.766 "data_offset": 0, 00:14:11.766 "data_size": 65536 00:14:11.766 }, 00:14:11.766 { 00:14:11.766 "name": "BaseBdev2", 00:14:11.766 "uuid": "7bbc18bf-af19-5cbf-a672-c1b2a6eb0062", 00:14:11.766 "is_configured": true, 00:14:11.766 "data_offset": 0, 00:14:11.766 "data_size": 65536 00:14:11.766 } 00:14:11.766 ] 00:14:11.766 }' 00:14:11.766 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.024 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:12.024 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:14:12.024 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:12.024 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:12.024 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:12.024 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.024 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:12.024 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:12.024 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.024 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.024 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.024 14:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.024 14:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.024 14:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.024 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.024 "name": "raid_bdev1", 00:14:12.024 "uuid": "9d738984-1515-4349-b631-3d2d943bd400", 00:14:12.024 "strip_size_kb": 0, 00:14:12.024 "state": "online", 00:14:12.024 "raid_level": "raid1", 00:14:12.024 "superblock": false, 00:14:12.024 "num_base_bdevs": 2, 00:14:12.024 "num_base_bdevs_discovered": 2, 00:14:12.024 "num_base_bdevs_operational": 2, 00:14:12.024 "base_bdevs_list": [ 00:14:12.024 { 00:14:12.024 "name": "spare", 00:14:12.024 "uuid": "c342e328-a37c-5240-a5c5-f39e4439e01a", 00:14:12.024 "is_configured": true, 00:14:12.024 "data_offset": 0, 00:14:12.024 "data_size": 65536 
00:14:12.024 }, 00:14:12.024 { 00:14:12.024 "name": "BaseBdev2", 00:14:12.024 "uuid": "7bbc18bf-af19-5cbf-a672-c1b2a6eb0062", 00:14:12.024 "is_configured": true, 00:14:12.024 "data_offset": 0, 00:14:12.024 "data_size": 65536 00:14:12.024 } 00:14:12.024 ] 00:14:12.024 }' 00:14:12.024 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.024 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:12.025 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.025 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:12.025 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:12.025 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.025 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.025 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.025 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.025 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:12.025 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.025 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.025 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.025 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.025 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.025 14:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:12.025 14:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.025 14:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.025 14:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.283 14:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.283 "name": "raid_bdev1", 00:14:12.283 "uuid": "9d738984-1515-4349-b631-3d2d943bd400", 00:14:12.283 "strip_size_kb": 0, 00:14:12.283 "state": "online", 00:14:12.283 "raid_level": "raid1", 00:14:12.283 "superblock": false, 00:14:12.283 "num_base_bdevs": 2, 00:14:12.283 "num_base_bdevs_discovered": 2, 00:14:12.283 "num_base_bdevs_operational": 2, 00:14:12.283 "base_bdevs_list": [ 00:14:12.283 { 00:14:12.283 "name": "spare", 00:14:12.283 "uuid": "c342e328-a37c-5240-a5c5-f39e4439e01a", 00:14:12.283 "is_configured": true, 00:14:12.283 "data_offset": 0, 00:14:12.283 "data_size": 65536 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "name": "BaseBdev2", 00:14:12.283 "uuid": "7bbc18bf-af19-5cbf-a672-c1b2a6eb0062", 00:14:12.283 "is_configured": true, 00:14:12.283 "data_offset": 0, 00:14:12.283 "data_size": 65536 00:14:12.283 } 00:14:12.283 ] 00:14:12.283 }' 00:14:12.283 14:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.283 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.541 14:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:12.541 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.541 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.541 [2024-11-20 14:24:51.473178] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:12.541 [2024-11-20 14:24:51.473382] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:14:12.541 [2024-11-20 14:24:51.473506] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.541 [2024-11-20 14:24:51.473599] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.541 [2024-11-20 14:24:51.473617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:12.541 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.541 14:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:12.541 14:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.542 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.542 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.542 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.800 14:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:12.800 14:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:12.800 14:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:12.800 14:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:12.800 14:24:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.800 14:24:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:12.800 14:24:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:12.800 14:24:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:12.800 14:24:51 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:12.800 14:24:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:12.800 14:24:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:12.800 14:24:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:12.800 14:24:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:13.059 /dev/nbd0 00:14:13.059 14:24:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:13.059 14:24:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:13.059 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:13.059 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:13.059 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:13.059 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:13.059 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:13.059 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:13.059 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:13.059 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:13.059 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.059 1+0 records in 00:14:13.059 1+0 records out 00:14:13.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415309 s, 9.9 MB/s 00:14:13.059 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.059 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:13.059 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.059 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:13.059 14:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:13.059 14:24:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.060 14:24:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:13.060 14:24:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:13.318 /dev/nbd1 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.318 1+0 records in 00:14:13.318 1+0 records out 00:14:13.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040259 s, 10.2 MB/s 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:13.318 14:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:13.577 14:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:13.577 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.577 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:13.577 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:13.577 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:13.577 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.577 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:13.836 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:14:13.836 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:13.836 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:13.836 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.836 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.836 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:13.836 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:13.836 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.836 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.836 14:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:14.095 14:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:14.095 14:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:14.095 14:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:14.095 14:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.095 14:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.095 14:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:14.095 14:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:14.095 14:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.095 14:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:14.095 14:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75481 00:14:14.095 14:24:53 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@954 -- # '[' -z 75481 ']' 00:14:14.095 14:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75481 00:14:14.095 14:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:14.095 14:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:14.095 14:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75481 00:14:14.351 killing process with pid 75481 00:14:14.351 Received shutdown signal, test time was about 60.000000 seconds 00:14:14.351 00:14:14.351 Latency(us) 00:14:14.351 [2024-11-20T14:24:53.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.351 [2024-11-20T14:24:53.333Z] =================================================================================================================== 00:14:14.351 [2024-11-20T14:24:53.333Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:14.351 14:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:14.351 14:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:14.351 14:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75481' 00:14:14.351 14:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75481 00:14:14.351 14:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75481 00:14:14.351 [2024-11-20 14:24:53.076853] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:14.609 [2024-11-20 14:24:53.356782] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:15.544 14:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:15.544 00:14:15.544 real 0m18.638s 00:14:15.544 user 0m21.331s 00:14:15.544 sys 0m3.613s 00:14:15.544 ************************************ 
00:14:15.544 END TEST raid_rebuild_test 00:14:15.544 ************************************ 00:14:15.544 14:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.544 14:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.544 14:24:54 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:14:15.544 14:24:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:15.544 14:24:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.544 14:24:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:15.544 ************************************ 00:14:15.544 START TEST raid_rebuild_test_sb 00:14:15.544 ************************************ 00:14:15.544 14:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:14:15.544 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:15.544 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:15.544 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:15.544 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:15.544 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:15.544 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:15.544 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.544 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:15.544 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.544 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.544 
14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:15.544 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.544 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75933 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75933 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75933 ']' 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.802 14:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.802 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:15.802 Zero copy mechanism will not be used. 00:14:15.802 [2024-11-20 14:24:54.643340] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:14:15.802 [2024-11-20 14:24:54.643517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75933 ] 00:14:16.103 [2024-11-20 14:24:54.834167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.103 [2024-11-20 14:24:54.990628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.399 [2024-11-20 14:24:55.196141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.399 [2024-11-20 14:24:55.196331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 
00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.964 BaseBdev1_malloc 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.964 [2024-11-20 14:24:55.723768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:16.964 [2024-11-20 14:24:55.723871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.964 [2024-11-20 14:24:55.723903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:16.964 [2024-11-20 14:24:55.723922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.964 [2024-11-20 14:24:55.726669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.964 [2024-11-20 14:24:55.726717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:16.964 BaseBdev1 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.964 BaseBdev2_malloc 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.964 [2024-11-20 14:24:55.780209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:16.964 [2024-11-20 14:24:55.780287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.964 [2024-11-20 14:24:55.780321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:16.964 [2024-11-20 14:24:55.780340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.964 [2024-11-20 14:24:55.783061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.964 [2024-11-20 14:24:55.783110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:16.964 BaseBdev2 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.964 spare_malloc 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.964 14:24:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.964 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.965 spare_delay 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.965 [2024-11-20 14:24:55.853810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:16.965 [2024-11-20 14:24:55.853884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.965 [2024-11-20 14:24:55.853915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:16.965 [2024-11-20 14:24:55.853934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.965 [2024-11-20 14:24:55.856788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.965 [2024-11-20 14:24:55.856854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:16.965 spare 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:16.965 [2024-11-20 14:24:55.861861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.965 [2024-11-20 14:24:55.864516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:16.965 [2024-11-20 14:24:55.864767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:16.965 [2024-11-20 14:24:55.864808] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:16.965 [2024-11-20 14:24:55.865145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:16.965 [2024-11-20 14:24:55.865363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:16.965 [2024-11-20 14:24:55.865429] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:16.965 [2024-11-20 14:24:55.865621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.965 14:24:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.965 "name": "raid_bdev1", 00:14:16.965 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:16.965 "strip_size_kb": 0, 00:14:16.965 "state": "online", 00:14:16.965 "raid_level": "raid1", 00:14:16.965 "superblock": true, 00:14:16.965 "num_base_bdevs": 2, 00:14:16.965 "num_base_bdevs_discovered": 2, 00:14:16.965 "num_base_bdevs_operational": 2, 00:14:16.965 "base_bdevs_list": [ 00:14:16.965 { 00:14:16.965 "name": "BaseBdev1", 00:14:16.965 "uuid": "b088aa2c-01d8-5237-9f73-478298adde9a", 00:14:16.965 "is_configured": true, 00:14:16.965 "data_offset": 2048, 00:14:16.965 "data_size": 63488 00:14:16.965 }, 00:14:16.965 { 00:14:16.965 "name": "BaseBdev2", 00:14:16.965 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:16.965 "is_configured": true, 00:14:16.965 "data_offset": 2048, 00:14:16.965 "data_size": 63488 00:14:16.965 } 00:14:16.965 ] 00:14:16.965 }' 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.965 14:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.531 [2024-11-20 14:24:56.410433] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:17.531 
14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:17.531 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:17.532 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:17.532 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:17.532 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:18.100 [2024-11-20 14:24:56.794231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:18.100 /dev/nbd0 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:18.100 14:24:56 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:18.100 1+0 records in 00:14:18.100 1+0 records out 00:14:18.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381604 s, 10.7 MB/s 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:18.100 14:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:24.704 63488+0 records in 00:14:24.704 63488+0 records out 00:14:24.704 32505856 bytes (33 MB, 31 MiB) copied, 5.95019 s, 5.5 MB/s 00:14:24.704 14:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:24.704 14:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.704 14:25:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:24.704 14:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:24.704 14:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:24.704 14:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.704 14:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:24.705 [2024-11-20 14:25:03.110545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.705 [2024-11-20 14:25:03.148070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.705 14:25:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.705 "name": "raid_bdev1", 00:14:24.705 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:24.705 "strip_size_kb": 0, 00:14:24.705 "state": "online", 00:14:24.705 "raid_level": "raid1", 00:14:24.705 "superblock": true, 00:14:24.705 "num_base_bdevs": 2, 
00:14:24.705 "num_base_bdevs_discovered": 1, 00:14:24.705 "num_base_bdevs_operational": 1, 00:14:24.705 "base_bdevs_list": [ 00:14:24.705 { 00:14:24.705 "name": null, 00:14:24.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.705 "is_configured": false, 00:14:24.705 "data_offset": 0, 00:14:24.705 "data_size": 63488 00:14:24.705 }, 00:14:24.705 { 00:14:24.705 "name": "BaseBdev2", 00:14:24.705 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:24.705 "is_configured": true, 00:14:24.705 "data_offset": 2048, 00:14:24.705 "data_size": 63488 00:14:24.705 } 00:14:24.705 ] 00:14:24.705 }' 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.705 14:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.705 [2024-11-20 14:25:03.668206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:24.705 [2024-11-20 14:25:03.684721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:14:24.964 14:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.964 14:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:24.964 [2024-11-20 14:25:03.687179] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:25.900 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.900 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.900 14:25:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.900 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.901 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.901 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.901 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.901 14:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.901 14:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.901 14:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.901 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.901 "name": "raid_bdev1", 00:14:25.901 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:25.901 "strip_size_kb": 0, 00:14:25.901 "state": "online", 00:14:25.901 "raid_level": "raid1", 00:14:25.901 "superblock": true, 00:14:25.901 "num_base_bdevs": 2, 00:14:25.901 "num_base_bdevs_discovered": 2, 00:14:25.901 "num_base_bdevs_operational": 2, 00:14:25.901 "process": { 00:14:25.901 "type": "rebuild", 00:14:25.901 "target": "spare", 00:14:25.901 "progress": { 00:14:25.901 "blocks": 20480, 00:14:25.901 "percent": 32 00:14:25.901 } 00:14:25.901 }, 00:14:25.901 "base_bdevs_list": [ 00:14:25.901 { 00:14:25.901 "name": "spare", 00:14:25.901 "uuid": "82b1533c-81d4-5f15-a8f8-54669e14b8d0", 00:14:25.901 "is_configured": true, 00:14:25.901 "data_offset": 2048, 00:14:25.901 "data_size": 63488 00:14:25.901 }, 00:14:25.901 { 00:14:25.901 "name": "BaseBdev2", 00:14:25.901 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:25.901 "is_configured": true, 00:14:25.901 "data_offset": 2048, 00:14:25.901 "data_size": 63488 00:14:25.901 } 
00:14:25.901 ] 00:14:25.901 }' 00:14:25.901 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.901 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.901 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.901 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.901 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:25.901 14:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.901 14:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.901 [2024-11-20 14:25:04.848592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.159 [2024-11-20 14:25:04.896030] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:26.159 [2024-11-20 14:25:04.896379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.159 [2024-11-20 14:25:04.896410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.159 [2024-11-20 14:25:04.896431] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.159 "name": "raid_bdev1", 00:14:26.159 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:26.159 "strip_size_kb": 0, 00:14:26.159 "state": "online", 00:14:26.159 "raid_level": "raid1", 00:14:26.159 "superblock": true, 00:14:26.159 "num_base_bdevs": 2, 00:14:26.159 "num_base_bdevs_discovered": 1, 00:14:26.159 "num_base_bdevs_operational": 1, 00:14:26.159 "base_bdevs_list": [ 00:14:26.159 { 00:14:26.159 "name": null, 00:14:26.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.159 "is_configured": false, 00:14:26.159 "data_offset": 0, 00:14:26.159 "data_size": 63488 00:14:26.159 }, 00:14:26.159 { 00:14:26.159 "name": "BaseBdev2", 00:14:26.159 "uuid": 
"ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:26.159 "is_configured": true, 00:14:26.159 "data_offset": 2048, 00:14:26.159 "data_size": 63488 00:14:26.159 } 00:14:26.159 ] 00:14:26.159 }' 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.159 14:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.728 "name": "raid_bdev1", 00:14:26.728 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:26.728 "strip_size_kb": 0, 00:14:26.728 "state": "online", 00:14:26.728 "raid_level": "raid1", 00:14:26.728 "superblock": true, 00:14:26.728 "num_base_bdevs": 2, 00:14:26.728 "num_base_bdevs_discovered": 1, 00:14:26.728 "num_base_bdevs_operational": 1, 00:14:26.728 "base_bdevs_list": [ 00:14:26.728 { 
00:14:26.728 "name": null, 00:14:26.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.728 "is_configured": false, 00:14:26.728 "data_offset": 0, 00:14:26.728 "data_size": 63488 00:14:26.728 }, 00:14:26.728 { 00:14:26.728 "name": "BaseBdev2", 00:14:26.728 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:26.728 "is_configured": true, 00:14:26.728 "data_offset": 2048, 00:14:26.728 "data_size": 63488 00:14:26.728 } 00:14:26.728 ] 00:14:26.728 }' 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.728 [2024-11-20 14:25:05.624437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.728 [2024-11-20 14:25:05.640040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.728 14:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:26.728 [2024-11-20 14:25:05.642625] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:27.664 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.922 14:25:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.922 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.922 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.922 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.922 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.922 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.922 14:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.922 14:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.922 14:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.922 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.922 "name": "raid_bdev1", 00:14:27.922 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:27.922 "strip_size_kb": 0, 00:14:27.922 "state": "online", 00:14:27.922 "raid_level": "raid1", 00:14:27.922 "superblock": true, 00:14:27.922 "num_base_bdevs": 2, 00:14:27.922 "num_base_bdevs_discovered": 2, 00:14:27.922 "num_base_bdevs_operational": 2, 00:14:27.922 "process": { 00:14:27.922 "type": "rebuild", 00:14:27.922 "target": "spare", 00:14:27.922 "progress": { 00:14:27.922 "blocks": 20480, 00:14:27.922 "percent": 32 00:14:27.922 } 00:14:27.922 }, 00:14:27.922 "base_bdevs_list": [ 00:14:27.922 { 00:14:27.922 "name": "spare", 00:14:27.922 "uuid": "82b1533c-81d4-5f15-a8f8-54669e14b8d0", 00:14:27.922 "is_configured": true, 00:14:27.922 "data_offset": 2048, 00:14:27.922 "data_size": 63488 00:14:27.922 }, 00:14:27.922 { 00:14:27.922 "name": "BaseBdev2", 00:14:27.922 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:27.922 
"is_configured": true, 00:14:27.922 "data_offset": 2048, 00:14:27.922 "data_size": 63488 00:14:27.922 } 00:14:27.922 ] 00:14:27.922 }' 00:14:27.922 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:27.923 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=413 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.923 "name": "raid_bdev1", 00:14:27.923 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:27.923 "strip_size_kb": 0, 00:14:27.923 "state": "online", 00:14:27.923 "raid_level": "raid1", 00:14:27.923 "superblock": true, 00:14:27.923 "num_base_bdevs": 2, 00:14:27.923 "num_base_bdevs_discovered": 2, 00:14:27.923 "num_base_bdevs_operational": 2, 00:14:27.923 "process": { 00:14:27.923 "type": "rebuild", 00:14:27.923 "target": "spare", 00:14:27.923 "progress": { 00:14:27.923 "blocks": 22528, 00:14:27.923 "percent": 35 00:14:27.923 } 00:14:27.923 }, 00:14:27.923 "base_bdevs_list": [ 00:14:27.923 { 00:14:27.923 "name": "spare", 00:14:27.923 "uuid": "82b1533c-81d4-5f15-a8f8-54669e14b8d0", 00:14:27.923 "is_configured": true, 00:14:27.923 "data_offset": 2048, 00:14:27.923 "data_size": 63488 00:14:27.923 }, 00:14:27.923 { 00:14:27.923 "name": "BaseBdev2", 00:14:27.923 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:27.923 "is_configured": true, 00:14:27.923 "data_offset": 2048, 00:14:27.923 "data_size": 63488 00:14:27.923 } 00:14:27.923 ] 00:14:27.923 }' 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.923 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.923 14:25:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.182 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.182 14:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.118 14:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.118 14:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.118 14:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.118 14:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.118 14:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.118 14:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.118 14:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.118 14:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.118 14:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.118 14:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.118 14:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.118 14:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.118 "name": "raid_bdev1", 00:14:29.118 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:29.118 "strip_size_kb": 0, 00:14:29.118 "state": "online", 00:14:29.118 "raid_level": "raid1", 00:14:29.118 "superblock": true, 00:14:29.118 "num_base_bdevs": 2, 00:14:29.118 "num_base_bdevs_discovered": 2, 00:14:29.118 "num_base_bdevs_operational": 2, 00:14:29.118 "process": { 
00:14:29.118 "type": "rebuild", 00:14:29.118 "target": "spare", 00:14:29.118 "progress": { 00:14:29.118 "blocks": 45056, 00:14:29.118 "percent": 70 00:14:29.118 } 00:14:29.118 }, 00:14:29.118 "base_bdevs_list": [ 00:14:29.118 { 00:14:29.118 "name": "spare", 00:14:29.118 "uuid": "82b1533c-81d4-5f15-a8f8-54669e14b8d0", 00:14:29.118 "is_configured": true, 00:14:29.118 "data_offset": 2048, 00:14:29.118 "data_size": 63488 00:14:29.118 }, 00:14:29.118 { 00:14:29.118 "name": "BaseBdev2", 00:14:29.118 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:29.118 "is_configured": true, 00:14:29.118 "data_offset": 2048, 00:14:29.118 "data_size": 63488 00:14:29.118 } 00:14:29.118 ] 00:14:29.118 }' 00:14:29.118 14:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.118 14:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.118 14:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.376 14:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.376 14:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.943 [2024-11-20 14:25:08.764904] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:29.943 [2024-11-20 14:25:08.765015] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:29.943 [2024-11-20 14:25:08.765167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.201 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.201 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.201 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.201 
14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.201 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.201 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.201 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.201 14:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.201 14:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.201 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.201 14:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.201 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.201 "name": "raid_bdev1", 00:14:30.201 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:30.201 "strip_size_kb": 0, 00:14:30.201 "state": "online", 00:14:30.201 "raid_level": "raid1", 00:14:30.201 "superblock": true, 00:14:30.201 "num_base_bdevs": 2, 00:14:30.201 "num_base_bdevs_discovered": 2, 00:14:30.201 "num_base_bdevs_operational": 2, 00:14:30.201 "base_bdevs_list": [ 00:14:30.201 { 00:14:30.201 "name": "spare", 00:14:30.201 "uuid": "82b1533c-81d4-5f15-a8f8-54669e14b8d0", 00:14:30.201 "is_configured": true, 00:14:30.201 "data_offset": 2048, 00:14:30.201 "data_size": 63488 00:14:30.201 }, 00:14:30.201 { 00:14:30.201 "name": "BaseBdev2", 00:14:30.201 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:30.201 "is_configured": true, 00:14:30.201 "data_offset": 2048, 00:14:30.201 "data_size": 63488 00:14:30.201 } 00:14:30.201 ] 00:14:30.201 }' 00:14:30.201 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.459 "name": "raid_bdev1", 00:14:30.459 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:30.459 "strip_size_kb": 0, 00:14:30.459 "state": "online", 00:14:30.459 "raid_level": "raid1", 00:14:30.459 "superblock": true, 00:14:30.459 "num_base_bdevs": 2, 00:14:30.459 "num_base_bdevs_discovered": 2, 00:14:30.459 "num_base_bdevs_operational": 2, 00:14:30.459 "base_bdevs_list": [ 00:14:30.459 { 00:14:30.459 
"name": "spare", 00:14:30.459 "uuid": "82b1533c-81d4-5f15-a8f8-54669e14b8d0", 00:14:30.459 "is_configured": true, 00:14:30.459 "data_offset": 2048, 00:14:30.459 "data_size": 63488 00:14:30.459 }, 00:14:30.459 { 00:14:30.459 "name": "BaseBdev2", 00:14:30.459 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:30.459 "is_configured": true, 00:14:30.459 "data_offset": 2048, 00:14:30.459 "data_size": 63488 00:14:30.459 } 00:14:30.459 ] 00:14:30.459 }' 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.459 14:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.737 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.737 "name": "raid_bdev1", 00:14:30.737 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:30.737 "strip_size_kb": 0, 00:14:30.737 "state": "online", 00:14:30.737 "raid_level": "raid1", 00:14:30.737 "superblock": true, 00:14:30.737 "num_base_bdevs": 2, 00:14:30.737 "num_base_bdevs_discovered": 2, 00:14:30.737 "num_base_bdevs_operational": 2, 00:14:30.737 "base_bdevs_list": [ 00:14:30.737 { 00:14:30.737 "name": "spare", 00:14:30.737 "uuid": "82b1533c-81d4-5f15-a8f8-54669e14b8d0", 00:14:30.737 "is_configured": true, 00:14:30.737 "data_offset": 2048, 00:14:30.737 "data_size": 63488 00:14:30.737 }, 00:14:30.737 { 00:14:30.737 "name": "BaseBdev2", 00:14:30.737 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:30.737 "is_configured": true, 00:14:30.737 "data_offset": 2048, 00:14:30.737 "data_size": 63488 00:14:30.737 } 00:14:30.737 ] 00:14:30.737 }' 00:14:30.737 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.737 14:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:31.007 [2024-11-20 14:25:09.893105] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.007 [2024-11-20 14:25:09.893285] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.007 [2024-11-20 14:25:09.893495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.007 [2024-11-20 14:25:09.893695] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.007 [2024-11-20 14:25:09.893828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:31.007 14:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:31.575 /dev/nbd0 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.575 1+0 records in 00:14:31.575 1+0 records out 00:14:31.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057437 s, 7.1 MB/s 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:31.575 14:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:31.834 /dev/nbd1 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:31.834 14:25:10 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.834 1+0 records in 00:14:31.834 1+0 records out 00:14:31.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400778 s, 10.2 MB/s 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:31.834 
14:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.834 14:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:32.093 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:32.093 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:32.093 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:32.093 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.093 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.093 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:32.093 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:32.093 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.093 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.093 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:32.661 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:32.661 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:32.661 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:32.661 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.662 [2024-11-20 14:25:11.381721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:32.662 [2024-11-20 14:25:11.381930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.662 [2024-11-20 14:25:11.382150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:32.662 [2024-11-20 14:25:11.382285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.662 [2024-11-20 14:25:11.385311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.662 [2024-11-20 14:25:11.385356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:32.662 [2024-11-20 14:25:11.385512] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:32.662 [2024-11-20 
14:25:11.385574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.662 [2024-11-20 14:25:11.385753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:32.662 spare 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.662 [2024-11-20 14:25:11.485932] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:32.662 [2024-11-20 14:25:11.486023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:32.662 [2024-11-20 14:25:11.486472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:32.662 [2024-11-20 14:25:11.486743] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:32.662 [2024-11-20 14:25:11.486768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:32.662 [2024-11-20 14:25:11.487026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.662 "name": "raid_bdev1", 00:14:32.662 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:32.662 "strip_size_kb": 0, 00:14:32.662 "state": "online", 00:14:32.662 "raid_level": "raid1", 00:14:32.662 "superblock": true, 00:14:32.662 "num_base_bdevs": 2, 00:14:32.662 "num_base_bdevs_discovered": 2, 00:14:32.662 "num_base_bdevs_operational": 2, 00:14:32.662 "base_bdevs_list": [ 00:14:32.662 { 00:14:32.662 "name": "spare", 00:14:32.662 "uuid": "82b1533c-81d4-5f15-a8f8-54669e14b8d0", 00:14:32.662 "is_configured": true, 00:14:32.662 "data_offset": 2048, 00:14:32.662 "data_size": 63488 00:14:32.662 }, 00:14:32.662 { 00:14:32.662 "name": "BaseBdev2", 00:14:32.662 "uuid": 
"ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:32.662 "is_configured": true, 00:14:32.662 "data_offset": 2048, 00:14:32.662 "data_size": 63488 00:14:32.662 } 00:14:32.662 ] 00:14:32.662 }' 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.662 14:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.230 "name": "raid_bdev1", 00:14:33.230 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:33.230 "strip_size_kb": 0, 00:14:33.230 "state": "online", 00:14:33.230 "raid_level": "raid1", 00:14:33.230 "superblock": true, 00:14:33.230 "num_base_bdevs": 2, 00:14:33.230 "num_base_bdevs_discovered": 2, 00:14:33.230 "num_base_bdevs_operational": 2, 00:14:33.230 "base_bdevs_list": [ 00:14:33.230 { 
00:14:33.230 "name": "spare", 00:14:33.230 "uuid": "82b1533c-81d4-5f15-a8f8-54669e14b8d0", 00:14:33.230 "is_configured": true, 00:14:33.230 "data_offset": 2048, 00:14:33.230 "data_size": 63488 00:14:33.230 }, 00:14:33.230 { 00:14:33.230 "name": "BaseBdev2", 00:14:33.230 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:33.230 "is_configured": true, 00:14:33.230 "data_offset": 2048, 00:14:33.230 "data_size": 63488 00:14:33.230 } 00:14:33.230 ] 00:14:33.230 }' 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.230 14:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.489 [2024-11-20 14:25:12.222143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.489 "name": "raid_bdev1", 00:14:33.489 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:33.489 "strip_size_kb": 0, 00:14:33.489 
"state": "online", 00:14:33.489 "raid_level": "raid1", 00:14:33.489 "superblock": true, 00:14:33.489 "num_base_bdevs": 2, 00:14:33.489 "num_base_bdevs_discovered": 1, 00:14:33.489 "num_base_bdevs_operational": 1, 00:14:33.489 "base_bdevs_list": [ 00:14:33.489 { 00:14:33.489 "name": null, 00:14:33.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.489 "is_configured": false, 00:14:33.489 "data_offset": 0, 00:14:33.489 "data_size": 63488 00:14:33.489 }, 00:14:33.489 { 00:14:33.489 "name": "BaseBdev2", 00:14:33.489 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:33.489 "is_configured": true, 00:14:33.489 "data_offset": 2048, 00:14:33.489 "data_size": 63488 00:14:33.489 } 00:14:33.489 ] 00:14:33.489 }' 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.489 14:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.057 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:34.057 14:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.057 14:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.057 [2024-11-20 14:25:12.754412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.057 [2024-11-20 14:25:12.754807] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:34.057 [2024-11-20 14:25:12.754842] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:34.057 [2024-11-20 14:25:12.754897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.057 [2024-11-20 14:25:12.770188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:14:34.057 14:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.057 14:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:34.057 [2024-11-20 14:25:12.772710] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:35.042 14:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.042 14:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.042 14:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.042 14:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.042 14:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.042 14:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.042 14:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.042 14:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.042 14:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.042 14:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.042 14:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.042 "name": "raid_bdev1", 00:14:35.042 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:35.042 "strip_size_kb": 0, 00:14:35.042 "state": "online", 00:14:35.042 "raid_level": "raid1", 
00:14:35.042 "superblock": true, 00:14:35.042 "num_base_bdevs": 2, 00:14:35.042 "num_base_bdevs_discovered": 2, 00:14:35.042 "num_base_bdevs_operational": 2, 00:14:35.042 "process": { 00:14:35.042 "type": "rebuild", 00:14:35.042 "target": "spare", 00:14:35.042 "progress": { 00:14:35.042 "blocks": 20480, 00:14:35.042 "percent": 32 00:14:35.043 } 00:14:35.043 }, 00:14:35.043 "base_bdevs_list": [ 00:14:35.043 { 00:14:35.043 "name": "spare", 00:14:35.043 "uuid": "82b1533c-81d4-5f15-a8f8-54669e14b8d0", 00:14:35.043 "is_configured": true, 00:14:35.043 "data_offset": 2048, 00:14:35.043 "data_size": 63488 00:14:35.043 }, 00:14:35.043 { 00:14:35.043 "name": "BaseBdev2", 00:14:35.043 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:35.043 "is_configured": true, 00:14:35.043 "data_offset": 2048, 00:14:35.043 "data_size": 63488 00:14:35.043 } 00:14:35.043 ] 00:14:35.043 }' 00:14:35.043 14:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.043 14:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.043 14:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.043 14:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.043 14:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:35.043 14:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.043 14:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.043 [2024-11-20 14:25:13.938252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.043 [2024-11-20 14:25:13.981402] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:35.043 [2024-11-20 14:25:13.981730] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:14:35.043 [2024-11-20 14:25:13.981760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.043 [2024-11-20 14:25:13.981776] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:35.043 14:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.043 14:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:35.043 14:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.043 14:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.043 14:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.043 14:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.043 14:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:35.043 14:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.043 14:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.043 14:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.043 14:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.043 14:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.043 14:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.043 14:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.043 14:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.301 14:25:14 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.301 14:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.301 "name": "raid_bdev1", 00:14:35.301 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:35.301 "strip_size_kb": 0, 00:14:35.301 "state": "online", 00:14:35.301 "raid_level": "raid1", 00:14:35.301 "superblock": true, 00:14:35.301 "num_base_bdevs": 2, 00:14:35.301 "num_base_bdevs_discovered": 1, 00:14:35.301 "num_base_bdevs_operational": 1, 00:14:35.301 "base_bdevs_list": [ 00:14:35.301 { 00:14:35.301 "name": null, 00:14:35.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.301 "is_configured": false, 00:14:35.301 "data_offset": 0, 00:14:35.301 "data_size": 63488 00:14:35.301 }, 00:14:35.301 { 00:14:35.301 "name": "BaseBdev2", 00:14:35.301 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:35.301 "is_configured": true, 00:14:35.301 "data_offset": 2048, 00:14:35.301 "data_size": 63488 00:14:35.301 } 00:14:35.301 ] 00:14:35.301 }' 00:14:35.301 14:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.301 14:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.869 14:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:35.869 14:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.869 14:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.869 [2024-11-20 14:25:14.573547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:35.869 [2024-11-20 14:25:14.573764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.869 [2024-11-20 14:25:14.573807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:35.869 [2024-11-20 14:25:14.573827] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.869 [2024-11-20 14:25:14.574428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.869 [2024-11-20 14:25:14.574462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:35.869 [2024-11-20 14:25:14.574583] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:35.869 [2024-11-20 14:25:14.574606] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:35.869 [2024-11-20 14:25:14.574621] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:35.869 [2024-11-20 14:25:14.574655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:35.869 spare 00:14:35.869 [2024-11-20 14:25:14.590005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:35.869 14:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.869 14:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:35.869 [2024-11-20 14:25:14.592528] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:36.805 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.805 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.805 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.805 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.805 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.806 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:36.806 14:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.806 14:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.806 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.806 14:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.806 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.806 "name": "raid_bdev1", 00:14:36.806 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:36.806 "strip_size_kb": 0, 00:14:36.806 "state": "online", 00:14:36.806 "raid_level": "raid1", 00:14:36.806 "superblock": true, 00:14:36.806 "num_base_bdevs": 2, 00:14:36.806 "num_base_bdevs_discovered": 2, 00:14:36.806 "num_base_bdevs_operational": 2, 00:14:36.806 "process": { 00:14:36.806 "type": "rebuild", 00:14:36.806 "target": "spare", 00:14:36.806 "progress": { 00:14:36.806 "blocks": 20480, 00:14:36.806 "percent": 32 00:14:36.806 } 00:14:36.806 }, 00:14:36.806 "base_bdevs_list": [ 00:14:36.806 { 00:14:36.806 "name": "spare", 00:14:36.806 "uuid": "82b1533c-81d4-5f15-a8f8-54669e14b8d0", 00:14:36.806 "is_configured": true, 00:14:36.806 "data_offset": 2048, 00:14:36.806 "data_size": 63488 00:14:36.806 }, 00:14:36.806 { 00:14:36.806 "name": "BaseBdev2", 00:14:36.806 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:36.806 "is_configured": true, 00:14:36.806 "data_offset": 2048, 00:14:36.806 "data_size": 63488 00:14:36.806 } 00:14:36.806 ] 00:14:36.806 }' 00:14:36.806 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.806 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.806 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.806 
14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.806 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:36.806 14:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.806 14:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.806 [2024-11-20 14:25:15.758205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.065 [2024-11-20 14:25:15.801424] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:37.065 [2024-11-20 14:25:15.801745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.065 [2024-11-20 14:25:15.801782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.065 [2024-11-20 14:25:15.801796] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.065 "name": "raid_bdev1", 00:14:37.065 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:37.065 "strip_size_kb": 0, 00:14:37.065 "state": "online", 00:14:37.065 "raid_level": "raid1", 00:14:37.065 "superblock": true, 00:14:37.065 "num_base_bdevs": 2, 00:14:37.065 "num_base_bdevs_discovered": 1, 00:14:37.065 "num_base_bdevs_operational": 1, 00:14:37.065 "base_bdevs_list": [ 00:14:37.065 { 00:14:37.065 "name": null, 00:14:37.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.065 "is_configured": false, 00:14:37.065 "data_offset": 0, 00:14:37.065 "data_size": 63488 00:14:37.065 }, 00:14:37.065 { 00:14:37.065 "name": "BaseBdev2", 00:14:37.065 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:37.065 "is_configured": true, 00:14:37.065 "data_offset": 2048, 00:14:37.065 "data_size": 63488 00:14:37.065 } 00:14:37.065 ] 00:14:37.065 }' 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.065 14:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.634 14:25:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.634 "name": "raid_bdev1", 00:14:37.634 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:37.634 "strip_size_kb": 0, 00:14:37.634 "state": "online", 00:14:37.634 "raid_level": "raid1", 00:14:37.634 "superblock": true, 00:14:37.634 "num_base_bdevs": 2, 00:14:37.634 "num_base_bdevs_discovered": 1, 00:14:37.634 "num_base_bdevs_operational": 1, 00:14:37.634 "base_bdevs_list": [ 00:14:37.634 { 00:14:37.634 "name": null, 00:14:37.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.634 "is_configured": false, 00:14:37.634 "data_offset": 0, 00:14:37.634 "data_size": 63488 00:14:37.634 }, 00:14:37.634 { 00:14:37.634 "name": "BaseBdev2", 00:14:37.634 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:37.634 "is_configured": true, 00:14:37.634 "data_offset": 2048, 00:14:37.634 "data_size": 
63488 00:14:37.634 } 00:14:37.634 ] 00:14:37.634 }' 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.634 [2024-11-20 14:25:16.565600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:37.634 [2024-11-20 14:25:16.565792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.634 [2024-11-20 14:25:16.565942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:37.634 [2024-11-20 14:25:16.565977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.634 [2024-11-20 14:25:16.566568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.634 [2024-11-20 14:25:16.566601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:14:37.634 [2024-11-20 14:25:16.566712] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:37.634 [2024-11-20 14:25:16.566735] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:37.634 [2024-11-20 14:25:16.566750] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:37.634 [2024-11-20 14:25:16.566763] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:37.634 BaseBdev1 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.634 14:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.014 "name": "raid_bdev1", 00:14:39.014 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:39.014 "strip_size_kb": 0, 00:14:39.014 "state": "online", 00:14:39.014 "raid_level": "raid1", 00:14:39.014 "superblock": true, 00:14:39.014 "num_base_bdevs": 2, 00:14:39.014 "num_base_bdevs_discovered": 1, 00:14:39.014 "num_base_bdevs_operational": 1, 00:14:39.014 "base_bdevs_list": [ 00:14:39.014 { 00:14:39.014 "name": null, 00:14:39.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.014 "is_configured": false, 00:14:39.014 "data_offset": 0, 00:14:39.014 "data_size": 63488 00:14:39.014 }, 00:14:39.014 { 00:14:39.014 "name": "BaseBdev2", 00:14:39.014 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:39.014 "is_configured": true, 00:14:39.014 "data_offset": 2048, 00:14:39.014 "data_size": 63488 00:14:39.014 } 00:14:39.014 ] 00:14:39.014 }' 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.014 14:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.271 14:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:39.271 14:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.271 14:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:14:39.271 14:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:39.271 14:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.271 14:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.271 14:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.271 14:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.271 14:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.271 14:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.271 14:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.271 "name": "raid_bdev1", 00:14:39.271 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:39.271 "strip_size_kb": 0, 00:14:39.271 "state": "online", 00:14:39.271 "raid_level": "raid1", 00:14:39.271 "superblock": true, 00:14:39.271 "num_base_bdevs": 2, 00:14:39.271 "num_base_bdevs_discovered": 1, 00:14:39.271 "num_base_bdevs_operational": 1, 00:14:39.271 "base_bdevs_list": [ 00:14:39.271 { 00:14:39.271 "name": null, 00:14:39.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.271 "is_configured": false, 00:14:39.271 "data_offset": 0, 00:14:39.271 "data_size": 63488 00:14:39.271 }, 00:14:39.271 { 00:14:39.271 "name": "BaseBdev2", 00:14:39.271 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:39.271 "is_configured": true, 00:14:39.271 "data_offset": 2048, 00:14:39.271 "data_size": 63488 00:14:39.271 } 00:14:39.271 ] 00:14:39.271 }' 00:14:39.271 14:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.271 14:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:39.272 14:25:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.529 14:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:39.529 14:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:39.529 14:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:39.529 14:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:39.529 14:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:39.529 14:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:39.529 14:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:39.529 14:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:39.529 14:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:39.529 14:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.529 14:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.529 [2024-11-20 14:25:18.290129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.529 [2024-11-20 14:25:18.290479] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:39.529 [2024-11-20 14:25:18.290512] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:39.529 request: 00:14:39.529 { 00:14:39.529 "base_bdev": "BaseBdev1", 00:14:39.529 "raid_bdev": "raid_bdev1", 00:14:39.529 "method": 
"bdev_raid_add_base_bdev", 00:14:39.529 "req_id": 1 00:14:39.529 } 00:14:39.529 Got JSON-RPC error response 00:14:39.529 response: 00:14:39.529 { 00:14:39.529 "code": -22, 00:14:39.529 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:39.529 } 00:14:39.529 14:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:39.529 14:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:39.529 14:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:39.530 14:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:39.530 14:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:39.530 14:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:40.465 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:40.465 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.465 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.465 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.465 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.465 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:40.465 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.465 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.465 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.465 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.465 14:25:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.465 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.465 14:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.465 14:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.465 14:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.465 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.465 "name": "raid_bdev1", 00:14:40.465 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:40.465 "strip_size_kb": 0, 00:14:40.465 "state": "online", 00:14:40.465 "raid_level": "raid1", 00:14:40.465 "superblock": true, 00:14:40.465 "num_base_bdevs": 2, 00:14:40.465 "num_base_bdevs_discovered": 1, 00:14:40.465 "num_base_bdevs_operational": 1, 00:14:40.465 "base_bdevs_list": [ 00:14:40.465 { 00:14:40.465 "name": null, 00:14:40.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.465 "is_configured": false, 00:14:40.465 "data_offset": 0, 00:14:40.465 "data_size": 63488 00:14:40.465 }, 00:14:40.465 { 00:14:40.465 "name": "BaseBdev2", 00:14:40.465 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:40.465 "is_configured": true, 00:14:40.465 "data_offset": 2048, 00:14:40.465 "data_size": 63488 00:14:40.465 } 00:14:40.466 ] 00:14:40.466 }' 00:14:40.466 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.466 14:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.033 "name": "raid_bdev1", 00:14:41.033 "uuid": "c15b2e45-adf9-46f5-b1a0-02892c5d6deb", 00:14:41.033 "strip_size_kb": 0, 00:14:41.033 "state": "online", 00:14:41.033 "raid_level": "raid1", 00:14:41.033 "superblock": true, 00:14:41.033 "num_base_bdevs": 2, 00:14:41.033 "num_base_bdevs_discovered": 1, 00:14:41.033 "num_base_bdevs_operational": 1, 00:14:41.033 "base_bdevs_list": [ 00:14:41.033 { 00:14:41.033 "name": null, 00:14:41.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.033 "is_configured": false, 00:14:41.033 "data_offset": 0, 00:14:41.033 "data_size": 63488 00:14:41.033 }, 00:14:41.033 { 00:14:41.033 "name": "BaseBdev2", 00:14:41.033 "uuid": "ec64415e-930d-5e9a-8b81-59eb548b2e57", 00:14:41.033 "is_configured": true, 00:14:41.033 "data_offset": 2048, 00:14:41.033 "data_size": 63488 00:14:41.033 } 00:14:41.033 ] 00:14:41.033 }' 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75933 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75933 ']' 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75933 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:41.033 14:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75933 00:14:41.033 killing process with pid 75933 00:14:41.034 Received shutdown signal, test time was about 60.000000 seconds 00:14:41.034 00:14:41.034 Latency(us) 00:14:41.034 [2024-11-20T14:25:20.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.034 [2024-11-20T14:25:20.016Z] =================================================================================================================== 00:14:41.034 [2024-11-20T14:25:20.016Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:41.034 14:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:41.034 14:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:41.034 14:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75933' 00:14:41.034 14:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75933 00:14:41.034 [2024-11-20 14:25:19.983327] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:41.034 14:25:19 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75933 00:14:41.034 [2024-11-20 14:25:19.983487] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.034 [2024-11-20 14:25:19.983555] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.034 [2024-11-20 14:25:19.983576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:41.292 [2024-11-20 14:25:20.257009] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:42.670 00:14:42.670 real 0m26.787s 00:14:42.670 user 0m33.156s 00:14:42.670 sys 0m3.910s 00:14:42.670 ************************************ 00:14:42.670 END TEST raid_rebuild_test_sb 00:14:42.670 ************************************ 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.670 14:25:21 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:14:42.670 14:25:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:42.670 14:25:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:42.670 14:25:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:42.670 ************************************ 00:14:42.670 START TEST raid_rebuild_test_io 00:14:42.670 ************************************ 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:42.670 
14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76700 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76700 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76700 ']' 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:42.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:42.670 14:25:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.670 [2024-11-20 14:25:21.474716] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:14:42.670 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:42.670 Zero copy mechanism will not be used. 
00:14:42.670 [2024-11-20 14:25:21.475160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76700 ] 00:14:42.929 [2024-11-20 14:25:21.653657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.929 [2024-11-20 14:25:21.788664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.188 [2024-11-20 14:25:21.990602] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.188 [2024-11-20 14:25:21.990894] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.447 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:43.447 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:43.447 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:43.447 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:43.447 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.447 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.447 BaseBdev1_malloc 00:14:43.447 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.447 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:43.447 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.447 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.706 [2024-11-20 14:25:22.432261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:43.706 [2024-11-20 14:25:22.432348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.706 [2024-11-20 14:25:22.432383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:43.706 [2024-11-20 14:25:22.432403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.706 [2024-11-20 14:25:22.435264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.706 [2024-11-20 14:25:22.435326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:43.706 BaseBdev1 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.706 BaseBdev2_malloc 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.706 [2024-11-20 14:25:22.484400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:43.706 [2024-11-20 14:25:22.484490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.706 [2024-11-20 14:25:22.484528] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:43.706 [2024-11-20 14:25:22.484547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.706 [2024-11-20 14:25:22.487460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.706 [2024-11-20 14:25:22.487513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:43.706 BaseBdev2 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.706 spare_malloc 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.706 spare_delay 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.706 [2024-11-20 14:25:22.555534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:14:43.706 [2024-11-20 14:25:22.555623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.706 [2024-11-20 14:25:22.555659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:43.706 [2024-11-20 14:25:22.555678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.706 [2024-11-20 14:25:22.558632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.706 [2024-11-20 14:25:22.558838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:43.706 spare 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.706 [2024-11-20 14:25:22.563665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.706 [2024-11-20 14:25:22.566159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.706 [2024-11-20 14:25:22.566315] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:43.706 [2024-11-20 14:25:22.566339] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:43.706 [2024-11-20 14:25:22.566688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:43.706 [2024-11-20 14:25:22.566909] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:43.706 [2024-11-20 14:25:22.566928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:14:43.706 [2024-11-20 14:25:22.567186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.706 
"name": "raid_bdev1", 00:14:43.706 "uuid": "9c98992c-d194-44c9-91ae-0fee1b2ef5e1", 00:14:43.706 "strip_size_kb": 0, 00:14:43.706 "state": "online", 00:14:43.706 "raid_level": "raid1", 00:14:43.706 "superblock": false, 00:14:43.706 "num_base_bdevs": 2, 00:14:43.706 "num_base_bdevs_discovered": 2, 00:14:43.706 "num_base_bdevs_operational": 2, 00:14:43.706 "base_bdevs_list": [ 00:14:43.706 { 00:14:43.706 "name": "BaseBdev1", 00:14:43.706 "uuid": "991629d4-3c0b-5e3b-9f98-a6794a62ffd6", 00:14:43.706 "is_configured": true, 00:14:43.706 "data_offset": 0, 00:14:43.706 "data_size": 65536 00:14:43.706 }, 00:14:43.706 { 00:14:43.706 "name": "BaseBdev2", 00:14:43.706 "uuid": "b0ab18bd-980c-5ac7-887e-264f26b53ced", 00:14:43.706 "is_configured": true, 00:14:43.706 "data_offset": 0, 00:14:43.706 "data_size": 65536 00:14:43.706 } 00:14:43.706 ] 00:14:43.706 }' 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.706 14:25:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.275 [2024-11-20 14:25:23.060136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.275 [2024-11-20 14:25:23.151775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:44.275 14:25:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.275 "name": "raid_bdev1", 00:14:44.275 "uuid": "9c98992c-d194-44c9-91ae-0fee1b2ef5e1", 00:14:44.275 "strip_size_kb": 0, 00:14:44.275 "state": "online", 00:14:44.275 "raid_level": "raid1", 00:14:44.275 "superblock": false, 00:14:44.275 "num_base_bdevs": 2, 00:14:44.275 "num_base_bdevs_discovered": 1, 00:14:44.275 "num_base_bdevs_operational": 1, 00:14:44.275 "base_bdevs_list": [ 00:14:44.275 { 00:14:44.275 "name": null, 00:14:44.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.275 "is_configured": false, 00:14:44.275 "data_offset": 0, 00:14:44.275 "data_size": 65536 00:14:44.275 }, 00:14:44.275 { 00:14:44.275 "name": "BaseBdev2", 00:14:44.275 "uuid": "b0ab18bd-980c-5ac7-887e-264f26b53ced", 00:14:44.275 "is_configured": true, 00:14:44.275 "data_offset": 0, 00:14:44.275 "data_size": 65536 00:14:44.275 } 00:14:44.275 ] 00:14:44.275 }' 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:44.275 14:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.536 [2024-11-20 14:25:23.279882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:44.536 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:44.536 Zero copy mechanism will not be used. 00:14:44.536 Running I/O for 60 seconds... 00:14:44.797 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:44.797 14:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.797 14:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.797 [2024-11-20 14:25:23.634449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:44.797 14:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.797 14:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:44.797 [2024-11-20 14:25:23.699488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:44.797 [2024-11-20 14:25:23.702329] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:45.056 [2024-11-20 14:25:23.828352] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:45.056 [2024-11-20 14:25:23.829166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:45.315 [2024-11-20 14:25:24.049500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:45.315 [2024-11-20 14:25:24.049891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:45.573 135.00 IOPS, 405.00 MiB/s 
[2024-11-20T14:25:24.555Z] [2024-11-20 14:25:24.385163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:45.573 [2024-11-20 14:25:24.385846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:45.847 [2024-11-20 14:25:24.596238] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:45.847 [2024-11-20 14:25:24.596871] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:45.847 14:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.847 14:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.847 14:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.847 14:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.847 14:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.847 14:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.847 14:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.847 14:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.847 14:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.847 14:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.847 14:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.847 "name": "raid_bdev1", 00:14:45.847 "uuid": "9c98992c-d194-44c9-91ae-0fee1b2ef5e1", 00:14:45.847 
"strip_size_kb": 0, 00:14:45.847 "state": "online", 00:14:45.847 "raid_level": "raid1", 00:14:45.847 "superblock": false, 00:14:45.847 "num_base_bdevs": 2, 00:14:45.847 "num_base_bdevs_discovered": 2, 00:14:45.847 "num_base_bdevs_operational": 2, 00:14:45.847 "process": { 00:14:45.847 "type": "rebuild", 00:14:45.847 "target": "spare", 00:14:45.847 "progress": { 00:14:45.847 "blocks": 10240, 00:14:45.847 "percent": 15 00:14:45.847 } 00:14:45.847 }, 00:14:45.847 "base_bdevs_list": [ 00:14:45.847 { 00:14:45.847 "name": "spare", 00:14:45.847 "uuid": "abf4a5fb-0af4-5f25-91a3-65839f2a36f2", 00:14:45.847 "is_configured": true, 00:14:45.847 "data_offset": 0, 00:14:45.847 "data_size": 65536 00:14:45.847 }, 00:14:45.847 { 00:14:45.847 "name": "BaseBdev2", 00:14:45.847 "uuid": "b0ab18bd-980c-5ac7-887e-264f26b53ced", 00:14:45.847 "is_configured": true, 00:14:45.847 "data_offset": 0, 00:14:45.847 "data_size": 65536 00:14:45.847 } 00:14:45.847 ] 00:14:45.847 }' 00:14:45.847 14:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.847 14:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.847 14:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.106 14:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.106 14:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:46.106 14:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.106 14:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.106 [2024-11-20 14:25:24.844913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.106 [2024-11-20 14:25:24.964160] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such 
device 00:14:46.106 [2024-11-20 14:25:24.966202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.106 [2024-11-20 14:25:24.966346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.106 [2024-11-20 14:25:24.966407] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:46.106 [2024-11-20 14:25:25.025419] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:46.106 14:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.106 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:46.106 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.106 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.107 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.107 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.107 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:46.107 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.107 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.107 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.107 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.107 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.107 14:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.107 14:25:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:46.107 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.107 14:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.366 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.366 "name": "raid_bdev1", 00:14:46.366 "uuid": "9c98992c-d194-44c9-91ae-0fee1b2ef5e1", 00:14:46.366 "strip_size_kb": 0, 00:14:46.366 "state": "online", 00:14:46.366 "raid_level": "raid1", 00:14:46.366 "superblock": false, 00:14:46.366 "num_base_bdevs": 2, 00:14:46.366 "num_base_bdevs_discovered": 1, 00:14:46.366 "num_base_bdevs_operational": 1, 00:14:46.366 "base_bdevs_list": [ 00:14:46.366 { 00:14:46.366 "name": null, 00:14:46.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.366 "is_configured": false, 00:14:46.366 "data_offset": 0, 00:14:46.366 "data_size": 65536 00:14:46.366 }, 00:14:46.366 { 00:14:46.366 "name": "BaseBdev2", 00:14:46.366 "uuid": "b0ab18bd-980c-5ac7-887e-264f26b53ced", 00:14:46.366 "is_configured": true, 00:14:46.366 "data_offset": 0, 00:14:46.366 "data_size": 65536 00:14:46.366 } 00:14:46.366 ] 00:14:46.366 }' 00:14:46.366 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.366 14:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.624 121.00 IOPS, 363.00 MiB/s [2024-11-20T14:25:25.606Z] 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:46.624 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.624 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:46.624 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:46.624 14:25:25 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.624 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.624 14:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.624 14:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.624 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.884 14:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.884 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.884 "name": "raid_bdev1", 00:14:46.884 "uuid": "9c98992c-d194-44c9-91ae-0fee1b2ef5e1", 00:14:46.884 "strip_size_kb": 0, 00:14:46.884 "state": "online", 00:14:46.885 "raid_level": "raid1", 00:14:46.885 "superblock": false, 00:14:46.885 "num_base_bdevs": 2, 00:14:46.885 "num_base_bdevs_discovered": 1, 00:14:46.885 "num_base_bdevs_operational": 1, 00:14:46.885 "base_bdevs_list": [ 00:14:46.885 { 00:14:46.885 "name": null, 00:14:46.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.885 "is_configured": false, 00:14:46.885 "data_offset": 0, 00:14:46.885 "data_size": 65536 00:14:46.885 }, 00:14:46.885 { 00:14:46.885 "name": "BaseBdev2", 00:14:46.885 "uuid": "b0ab18bd-980c-5ac7-887e-264f26b53ced", 00:14:46.885 "is_configured": true, 00:14:46.885 "data_offset": 0, 00:14:46.885 "data_size": 65536 00:14:46.885 } 00:14:46.885 ] 00:14:46.885 }' 00:14:46.885 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.885 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.885 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.885 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:14:46.885 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:46.885 14:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.885 14:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.885 [2024-11-20 14:25:25.739648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.885 14:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.885 14:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:46.885 [2024-11-20 14:25:25.800449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:46.885 [2024-11-20 14:25:25.802937] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:47.144 [2024-11-20 14:25:25.912168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:47.144 [2024-11-20 14:25:25.912818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:47.403 [2024-11-20 14:25:26.136172] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:47.403 [2024-11-20 14:25:26.136535] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:47.661 140.33 IOPS, 421.00 MiB/s [2024-11-20T14:25:26.643Z] [2024-11-20 14:25:26.499391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:47.920 [2024-11-20 14:25:26.747943] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:47.920 [2024-11-20 14:25:26.748346] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:47.920 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.920 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.920 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.920 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.920 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.920 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.920 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.920 14:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.920 14:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.920 14:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.920 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.920 "name": "raid_bdev1", 00:14:47.920 "uuid": "9c98992c-d194-44c9-91ae-0fee1b2ef5e1", 00:14:47.920 "strip_size_kb": 0, 00:14:47.920 "state": "online", 00:14:47.920 "raid_level": "raid1", 00:14:47.920 "superblock": false, 00:14:47.920 "num_base_bdevs": 2, 00:14:47.920 "num_base_bdevs_discovered": 2, 00:14:47.920 "num_base_bdevs_operational": 2, 00:14:47.920 "process": { 00:14:47.920 "type": "rebuild", 00:14:47.920 "target": "spare", 00:14:47.920 "progress": { 00:14:47.920 "blocks": 10240, 00:14:47.920 "percent": 15 00:14:47.920 } 00:14:47.920 }, 00:14:47.920 "base_bdevs_list": [ 00:14:47.920 { 00:14:47.920 "name": "spare", 00:14:47.920 "uuid": 
"abf4a5fb-0af4-5f25-91a3-65839f2a36f2", 00:14:47.920 "is_configured": true, 00:14:47.920 "data_offset": 0, 00:14:47.920 "data_size": 65536 00:14:47.920 }, 00:14:47.920 { 00:14:47.920 "name": "BaseBdev2", 00:14:47.920 "uuid": "b0ab18bd-980c-5ac7-887e-264f26b53ced", 00:14:47.920 "is_configured": true, 00:14:47.920 "data_offset": 0, 00:14:47.920 "data_size": 65536 00:14:47.920 } 00:14:47.920 ] 00:14:47.920 }' 00:14:47.920 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.920 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.920 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.179 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.179 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:48.179 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:48.180 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:48.180 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:48.180 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=433 00:14:48.180 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:48.180 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.180 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.180 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.180 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.180 14:25:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.180 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.180 14:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.180 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.180 14:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.180 14:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.180 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.180 "name": "raid_bdev1", 00:14:48.180 "uuid": "9c98992c-d194-44c9-91ae-0fee1b2ef5e1", 00:14:48.180 "strip_size_kb": 0, 00:14:48.180 "state": "online", 00:14:48.180 "raid_level": "raid1", 00:14:48.180 "superblock": false, 00:14:48.180 "num_base_bdevs": 2, 00:14:48.180 "num_base_bdevs_discovered": 2, 00:14:48.180 "num_base_bdevs_operational": 2, 00:14:48.180 "process": { 00:14:48.180 "type": "rebuild", 00:14:48.180 "target": "spare", 00:14:48.180 "progress": { 00:14:48.180 "blocks": 12288, 00:14:48.180 "percent": 18 00:14:48.180 } 00:14:48.180 }, 00:14:48.180 "base_bdevs_list": [ 00:14:48.180 { 00:14:48.180 "name": "spare", 00:14:48.180 "uuid": "abf4a5fb-0af4-5f25-91a3-65839f2a36f2", 00:14:48.180 "is_configured": true, 00:14:48.180 "data_offset": 0, 00:14:48.180 "data_size": 65536 00:14:48.180 }, 00:14:48.180 { 00:14:48.180 "name": "BaseBdev2", 00:14:48.180 "uuid": "b0ab18bd-980c-5ac7-887e-264f26b53ced", 00:14:48.180 "is_configured": true, 00:14:48.180 "data_offset": 0, 00:14:48.180 "data_size": 65536 00:14:48.180 } 00:14:48.180 ] 00:14:48.180 }' 00:14:48.180 14:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.180 14:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.180 14:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.180 14:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.180 14:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:48.180 [2024-11-20 14:25:27.138212] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:48.180 [2024-11-20 14:25:27.138762] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:48.696 136.00 IOPS, 408.00 MiB/s [2024-11-20T14:25:27.679Z] [2024-11-20 14:25:27.470419] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:48.697 [2024-11-20 14:25:27.598815] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:48.955 [2024-11-20 14:25:27.844628] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:49.214 14:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.214 14:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.214 14:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.214 14:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.214 14:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.214 14:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.214 14:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:49.214 14:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.214 14:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.214 14:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.214 14:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.214 14:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.214 "name": "raid_bdev1", 00:14:49.214 "uuid": "9c98992c-d194-44c9-91ae-0fee1b2ef5e1", 00:14:49.214 "strip_size_kb": 0, 00:14:49.214 "state": "online", 00:14:49.214 "raid_level": "raid1", 00:14:49.214 "superblock": false, 00:14:49.214 "num_base_bdevs": 2, 00:14:49.214 "num_base_bdevs_discovered": 2, 00:14:49.214 "num_base_bdevs_operational": 2, 00:14:49.214 "process": { 00:14:49.214 "type": "rebuild", 00:14:49.214 "target": "spare", 00:14:49.214 "progress": { 00:14:49.214 "blocks": 28672, 00:14:49.214 "percent": 43 00:14:49.214 } 00:14:49.214 }, 00:14:49.214 "base_bdevs_list": [ 00:14:49.214 { 00:14:49.214 "name": "spare", 00:14:49.214 "uuid": "abf4a5fb-0af4-5f25-91a3-65839f2a36f2", 00:14:49.214 "is_configured": true, 00:14:49.214 "data_offset": 0, 00:14:49.214 "data_size": 65536 00:14:49.214 }, 00:14:49.214 { 00:14:49.214 "name": "BaseBdev2", 00:14:49.214 "uuid": "b0ab18bd-980c-5ac7-887e-264f26b53ced", 00:14:49.214 "is_configured": true, 00:14:49.214 "data_offset": 0, 00:14:49.214 "data_size": 65536 00:14:49.214 } 00:14:49.214 ] 00:14:49.214 }' 00:14:49.214 14:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.473 14:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.473 14:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.473 14:25:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.473 14:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:49.473 118.40 IOPS, 355.20 MiB/s [2024-11-20T14:25:28.455Z] [2024-11-20 14:25:28.305996] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:49.776 [2024-11-20 14:25:28.544003] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:50.051 [2024-11-20 14:25:28.902742] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:50.310 [2024-11-20 14:25:29.249843] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:50.310 14:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.310 14:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.310 14:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.310 14:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.310 14:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.310 14:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.310 14:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.310 14:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.310 14:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.310 14:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:50.310 14:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.569 105.17 IOPS, 315.50 MiB/s [2024-11-20T14:25:29.551Z] 14:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.569 "name": "raid_bdev1", 00:14:50.569 "uuid": "9c98992c-d194-44c9-91ae-0fee1b2ef5e1", 00:14:50.569 "strip_size_kb": 0, 00:14:50.569 "state": "online", 00:14:50.569 "raid_level": "raid1", 00:14:50.569 "superblock": false, 00:14:50.569 "num_base_bdevs": 2, 00:14:50.569 "num_base_bdevs_discovered": 2, 00:14:50.569 "num_base_bdevs_operational": 2, 00:14:50.569 "process": { 00:14:50.569 "type": "rebuild", 00:14:50.569 "target": "spare", 00:14:50.569 "progress": { 00:14:50.569 "blocks": 47104, 00:14:50.569 "percent": 71 00:14:50.569 } 00:14:50.569 }, 00:14:50.569 "base_bdevs_list": [ 00:14:50.569 { 00:14:50.569 "name": "spare", 00:14:50.569 "uuid": "abf4a5fb-0af4-5f25-91a3-65839f2a36f2", 00:14:50.569 "is_configured": true, 00:14:50.569 "data_offset": 0, 00:14:50.569 "data_size": 65536 00:14:50.569 }, 00:14:50.569 { 00:14:50.569 "name": "BaseBdev2", 00:14:50.569 "uuid": "b0ab18bd-980c-5ac7-887e-264f26b53ced", 00:14:50.569 "is_configured": true, 00:14:50.569 "data_offset": 0, 00:14:50.569 "data_size": 65536 00:14:50.569 } 00:14:50.569 ] 00:14:50.569 }' 00:14:50.569 14:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.569 14:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.569 14:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.569 14:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.569 14:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:50.828 [2024-11-20 14:25:29.579162] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
51200 offset_begin: 49152 offset_end: 55296 00:14:51.086 [2024-11-20 14:25:29.809082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:51.603 96.43 IOPS, 289.29 MiB/s [2024-11-20T14:25:30.585Z] 14:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:51.603 14:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.603 14:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.603 14:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.603 14:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.603 14:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.603 14:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.603 14:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.603 14:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.603 14:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.603 14:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.603 [2024-11-20 14:25:30.481381] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:51.603 14:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.603 "name": "raid_bdev1", 00:14:51.603 "uuid": "9c98992c-d194-44c9-91ae-0fee1b2ef5e1", 00:14:51.603 "strip_size_kb": 0, 00:14:51.603 "state": "online", 00:14:51.603 "raid_level": "raid1", 00:14:51.603 "superblock": false, 00:14:51.603 "num_base_bdevs": 2, 00:14:51.603 
"num_base_bdevs_discovered": 2, 00:14:51.603 "num_base_bdevs_operational": 2, 00:14:51.603 "process": { 00:14:51.603 "type": "rebuild", 00:14:51.603 "target": "spare", 00:14:51.603 "progress": { 00:14:51.603 "blocks": 63488, 00:14:51.604 "percent": 96 00:14:51.604 } 00:14:51.604 }, 00:14:51.604 "base_bdevs_list": [ 00:14:51.604 { 00:14:51.604 "name": "spare", 00:14:51.604 "uuid": "abf4a5fb-0af4-5f25-91a3-65839f2a36f2", 00:14:51.604 "is_configured": true, 00:14:51.604 "data_offset": 0, 00:14:51.604 "data_size": 65536 00:14:51.604 }, 00:14:51.604 { 00:14:51.604 "name": "BaseBdev2", 00:14:51.604 "uuid": "b0ab18bd-980c-5ac7-887e-264f26b53ced", 00:14:51.604 "is_configured": true, 00:14:51.604 "data_offset": 0, 00:14:51.604 "data_size": 65536 00:14:51.604 } 00:14:51.604 ] 00:14:51.604 }' 00:14:51.604 14:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.604 14:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.604 14:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.862 14:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.862 14:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:51.862 [2024-11-20 14:25:30.589608] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:51.862 [2024-11-20 14:25:30.592204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.689 88.25 IOPS, 264.75 MiB/s [2024-11-20T14:25:31.671Z] 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.689 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.689 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:14:52.689 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.689 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.689 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.689 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.689 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.689 14:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.689 14:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.689 14:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.689 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.689 "name": "raid_bdev1", 00:14:52.689 "uuid": "9c98992c-d194-44c9-91ae-0fee1b2ef5e1", 00:14:52.689 "strip_size_kb": 0, 00:14:52.689 "state": "online", 00:14:52.689 "raid_level": "raid1", 00:14:52.689 "superblock": false, 00:14:52.689 "num_base_bdevs": 2, 00:14:52.689 "num_base_bdevs_discovered": 2, 00:14:52.689 "num_base_bdevs_operational": 2, 00:14:52.689 "base_bdevs_list": [ 00:14:52.689 { 00:14:52.689 "name": "spare", 00:14:52.689 "uuid": "abf4a5fb-0af4-5f25-91a3-65839f2a36f2", 00:14:52.689 "is_configured": true, 00:14:52.689 "data_offset": 0, 00:14:52.689 "data_size": 65536 00:14:52.689 }, 00:14:52.689 { 00:14:52.689 "name": "BaseBdev2", 00:14:52.689 "uuid": "b0ab18bd-980c-5ac7-887e-264f26b53ced", 00:14:52.689 "is_configured": true, 00:14:52.689 "data_offset": 0, 00:14:52.689 "data_size": 65536 00:14:52.689 } 00:14:52.689 ] 00:14:52.689 }' 00:14:52.689 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.948 "name": "raid_bdev1", 00:14:52.948 "uuid": "9c98992c-d194-44c9-91ae-0fee1b2ef5e1", 00:14:52.948 "strip_size_kb": 0, 00:14:52.948 "state": "online", 00:14:52.948 "raid_level": "raid1", 00:14:52.948 "superblock": false, 00:14:52.948 "num_base_bdevs": 2, 00:14:52.948 "num_base_bdevs_discovered": 2, 00:14:52.948 "num_base_bdevs_operational": 2, 00:14:52.948 "base_bdevs_list": [ 00:14:52.948 { 00:14:52.948 
"name": "spare", 00:14:52.948 "uuid": "abf4a5fb-0af4-5f25-91a3-65839f2a36f2", 00:14:52.948 "is_configured": true, 00:14:52.948 "data_offset": 0, 00:14:52.948 "data_size": 65536 00:14:52.948 }, 00:14:52.948 { 00:14:52.948 "name": "BaseBdev2", 00:14:52.948 "uuid": "b0ab18bd-980c-5ac7-887e-264f26b53ced", 00:14:52.948 "is_configured": true, 00:14:52.948 "data_offset": 0, 00:14:52.948 "data_size": 65536 00:14:52.948 } 00:14:52.948 ] 00:14:52.948 }' 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.948 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:53.206 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.206 14:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.206 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.206 14:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.206 14:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.206 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.206 "name": "raid_bdev1", 00:14:53.206 "uuid": "9c98992c-d194-44c9-91ae-0fee1b2ef5e1", 00:14:53.206 "strip_size_kb": 0, 00:14:53.206 "state": "online", 00:14:53.206 "raid_level": "raid1", 00:14:53.206 "superblock": false, 00:14:53.206 "num_base_bdevs": 2, 00:14:53.206 "num_base_bdevs_discovered": 2, 00:14:53.206 "num_base_bdevs_operational": 2, 00:14:53.206 "base_bdevs_list": [ 00:14:53.206 { 00:14:53.206 "name": "spare", 00:14:53.206 "uuid": "abf4a5fb-0af4-5f25-91a3-65839f2a36f2", 00:14:53.206 "is_configured": true, 00:14:53.206 "data_offset": 0, 00:14:53.206 "data_size": 65536 00:14:53.206 }, 00:14:53.206 { 00:14:53.206 "name": "BaseBdev2", 00:14:53.206 "uuid": "b0ab18bd-980c-5ac7-887e-264f26b53ced", 00:14:53.206 "is_configured": true, 00:14:53.206 "data_offset": 0, 00:14:53.206 "data_size": 65536 00:14:53.206 } 00:14:53.206 ] 00:14:53.206 }' 00:14:53.206 14:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.206 14:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.464 82.00 IOPS, 246.00 MiB/s [2024-11-20T14:25:32.446Z] 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:53.465 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:53.465 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.723 [2024-11-20 14:25:32.446592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:53.723 [2024-11-20 14:25:32.446630] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.723 00:14:53.723 Latency(us) 00:14:53.723 [2024-11-20T14:25:32.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.723 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:53.723 raid_bdev1 : 9.20 80.87 242.61 0.00 0.00 17349.41 288.58 122016.12 00:14:53.723 [2024-11-20T14:25:32.705Z] =================================================================================================================== 00:14:53.723 [2024-11-20T14:25:32.705Z] Total : 80.87 242.61 0.00 0.00 17349.41 288.58 122016.12 00:14:53.723 [2024-11-20 14:25:32.502791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.723 [2024-11-20 14:25:32.502852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.723 [2024-11-20 14:25:32.502957] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.723 [2024-11-20 14:25:32.502976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:53.723 { 00:14:53.723 "results": [ 00:14:53.723 { 00:14:53.723 "job": "raid_bdev1", 00:14:53.723 "core_mask": "0x1", 00:14:53.723 "workload": "randrw", 00:14:53.723 "percentage": 50, 00:14:53.723 "status": "finished", 00:14:53.723 "queue_depth": 2, 00:14:53.723 "io_size": 3145728, 00:14:53.723 "runtime": 9.19988, 00:14:53.723 "iops": 80.8706200515659, 00:14:53.723 "mibps": 242.6118601546977, 00:14:53.723 "io_failed": 0, 00:14:53.723 "io_timeout": 0, 00:14:53.723 "avg_latency_us": 17349.412785923752, 00:14:53.723 
"min_latency_us": 288.58181818181816, 00:14:53.723 "max_latency_us": 122016.11636363636 00:14:53.723 } 00:14:53.723 ], 00:14:53.723 "core_count": 1 00:14:53.723 } 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.723 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:53.989 /dev/nbd0 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:53.989 1+0 records in 00:14:53.989 1+0 records out 00:14:53.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031727 s, 12.9 MB/s 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.989 14:25:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:54.248 /dev/nbd1 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 
-- # waitfornbd nbd1 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.248 1+0 records in 00:14:54.248 1+0 records out 00:14:54.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395511 s, 10.4 MB/s 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:54.248 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:54.248 
14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:54.507 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:54.507 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:54.507 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:54.507 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:54.507 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:54.507 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:54.507 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:55.074 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:55.074 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:55.074 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:55.074 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:55.074 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:55.074 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:55.074 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:55.074 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:55.074 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:55.074 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:55.074 14:25:33 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:55.074 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:55.074 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:55.074 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:55.074 14:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76700 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76700 ']' 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76700 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.334 
14:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76700 00:14:55.334 killing process with pid 76700 00:14:55.334 Received shutdown signal, test time was about 10.838127 seconds 00:14:55.334 00:14:55.334 Latency(us) 00:14:55.334 [2024-11-20T14:25:34.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.334 [2024-11-20T14:25:34.316Z] =================================================================================================================== 00:14:55.334 [2024-11-20T14:25:34.316Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76700' 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76700 00:14:55.334 [2024-11-20 14:25:34.120777] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:55.334 14:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76700 00:14:55.593 [2024-11-20 14:25:34.332955] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:56.529 ************************************ 00:14:56.529 END TEST raid_rebuild_test_io 00:14:56.529 ************************************ 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:56.529 00:14:56.529 real 0m14.068s 00:14:56.529 user 0m18.211s 00:14:56.529 sys 0m1.437s 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.529 14:25:35 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test 
raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:14:56.529 14:25:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:56.529 14:25:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.529 14:25:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:56.529 ************************************ 00:14:56.529 START TEST raid_rebuild_test_sb_io 00:14:56.529 ************************************ 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.529 14:25:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77097 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77097 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77097 ']' 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.529 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.529 14:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.788 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:56.788 Zero copy mechanism will not be used. 00:14:56.788 [2024-11-20 14:25:35.590475] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:14:56.788 [2024-11-20 14:25:35.590651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77097 ] 00:14:57.047 [2024-11-20 14:25:35.778389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.047 [2024-11-20 14:25:35.929369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.305 [2024-11-20 14:25:36.133649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.305 [2024-11-20 14:25:36.133718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.874 BaseBdev1_malloc 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.874 [2024-11-20 14:25:36.606126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:57.874 [2024-11-20 14:25:36.606204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.874 [2024-11-20 14:25:36.606235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:57.874 [2024-11-20 14:25:36.606253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.874 [2024-11-20 14:25:36.609366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.874 [2024-11-20 14:25:36.609413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:57.874 BaseBdev1 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.874 BaseBdev2_malloc 00:14:57.874 14:25:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.874 [2024-11-20 14:25:36.654612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:57.874 [2024-11-20 14:25:36.654688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.874 [2024-11-20 14:25:36.654720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:57.874 [2024-11-20 14:25:36.654737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.874 [2024-11-20 14:25:36.657509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.874 [2024-11-20 14:25:36.657558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:57.874 BaseBdev2 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.874 spare_malloc 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:57.874 14:25:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.874 spare_delay 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.874 [2024-11-20 14:25:36.726442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:57.874 [2024-11-20 14:25:36.726516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.874 [2024-11-20 14:25:36.726546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:57.874 [2024-11-20 14:25:36.726564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.874 [2024-11-20 14:25:36.729394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.874 [2024-11-20 14:25:36.729442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:57.874 spare 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.874 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.874 [2024-11-20 14:25:36.734514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:14:57.874 [2024-11-20 14:25:36.736925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.874 [2024-11-20 14:25:36.737187] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:57.874 [2024-11-20 14:25:36.737217] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:57.874 [2024-11-20 14:25:36.737524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:57.875 [2024-11-20 14:25:36.737740] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:57.875 [2024-11-20 14:25:36.737766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:57.875 [2024-11-20 14:25:36.737951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.875 14:25:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.875 "name": "raid_bdev1", 00:14:57.875 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:14:57.875 "strip_size_kb": 0, 00:14:57.875 "state": "online", 00:14:57.875 "raid_level": "raid1", 00:14:57.875 "superblock": true, 00:14:57.875 "num_base_bdevs": 2, 00:14:57.875 "num_base_bdevs_discovered": 2, 00:14:57.875 "num_base_bdevs_operational": 2, 00:14:57.875 "base_bdevs_list": [ 00:14:57.875 { 00:14:57.875 "name": "BaseBdev1", 00:14:57.875 "uuid": "54dd9789-fc6c-57b7-829b-322ac0c25f23", 00:14:57.875 "is_configured": true, 00:14:57.875 "data_offset": 2048, 00:14:57.875 "data_size": 63488 00:14:57.875 }, 00:14:57.875 { 00:14:57.875 "name": "BaseBdev2", 00:14:57.875 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:14:57.875 "is_configured": true, 00:14:57.875 "data_offset": 2048, 00:14:57.875 "data_size": 63488 00:14:57.875 } 00:14:57.875 ] 00:14:57.875 }' 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.875 14:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.444 [2024-11-20 14:25:37.271227] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:58.444 [2024-11-20 14:25:37.382879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.444 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.705 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:58.705 "name": "raid_bdev1", 00:14:58.705 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:14:58.705 "strip_size_kb": 0, 00:14:58.705 "state": "online", 00:14:58.705 "raid_level": "raid1", 00:14:58.705 "superblock": true, 00:14:58.705 "num_base_bdevs": 2, 00:14:58.705 "num_base_bdevs_discovered": 1, 00:14:58.705 "num_base_bdevs_operational": 1, 00:14:58.705 "base_bdevs_list": [ 00:14:58.705 { 00:14:58.705 "name": null, 00:14:58.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.705 "is_configured": false, 00:14:58.705 "data_offset": 0, 00:14:58.705 "data_size": 63488 00:14:58.705 }, 00:14:58.705 { 00:14:58.705 "name": "BaseBdev2", 00:14:58.705 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:14:58.705 "is_configured": true, 00:14:58.705 "data_offset": 2048, 00:14:58.705 "data_size": 63488 00:14:58.705 } 00:14:58.705 ] 00:14:58.705 }' 00:14:58.705 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.705 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.705 [2024-11-20 14:25:37.495176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:58.705 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:58.705 Zero copy mechanism will not be used. 00:14:58.705 Running I/O for 60 seconds... 
00:14:58.964 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:58.964 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.964 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.964 [2024-11-20 14:25:37.941930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.223 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.223 14:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:59.223 [2024-11-20 14:25:38.006625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:59.223 [2024-11-20 14:25:38.009259] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:59.223 [2024-11-20 14:25:38.119663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:59.223 [2024-11-20 14:25:38.120397] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:59.482 [2024-11-20 14:25:38.340299] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:59.482 [2024-11-20 14:25:38.340689] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:59.742 141.00 IOPS, 423.00 MiB/s [2024-11-20T14:25:38.724Z] [2024-11-20 14:25:38.672708] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:59.742 [2024-11-20 14:25:38.673298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:00.001 [2024-11-20 14:25:38.893870] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:00.260 14:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.260 14:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.260 14:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.260 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.260 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.260 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.260 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.260 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.260 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.260 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.260 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.260 "name": "raid_bdev1", 00:15:00.260 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:00.260 "strip_size_kb": 0, 00:15:00.260 "state": "online", 00:15:00.260 "raid_level": "raid1", 00:15:00.260 "superblock": true, 00:15:00.260 "num_base_bdevs": 2, 00:15:00.260 "num_base_bdevs_discovered": 2, 00:15:00.260 "num_base_bdevs_operational": 2, 00:15:00.260 "process": { 00:15:00.260 "type": "rebuild", 00:15:00.260 "target": "spare", 00:15:00.260 "progress": { 00:15:00.260 "blocks": 10240, 00:15:00.260 "percent": 16 00:15:00.260 } 00:15:00.260 }, 00:15:00.260 "base_bdevs_list": [ 00:15:00.260 { 00:15:00.260 "name": "spare", 
00:15:00.260 "uuid": "1a502574-3580-5aa9-bf64-fe3b04e1c091", 00:15:00.260 "is_configured": true, 00:15:00.260 "data_offset": 2048, 00:15:00.260 "data_size": 63488 00:15:00.260 }, 00:15:00.260 { 00:15:00.260 "name": "BaseBdev2", 00:15:00.260 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:00.260 "is_configured": true, 00:15:00.260 "data_offset": 2048, 00:15:00.260 "data_size": 63488 00:15:00.260 } 00:15:00.260 ] 00:15:00.260 }' 00:15:00.260 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.260 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.260 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.260 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.260 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:00.260 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.260 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.260 [2024-11-20 14:25:39.164718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.260 [2024-11-20 14:25:39.232953] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:00.521 [2024-11-20 14:25:39.274705] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:00.521 [2024-11-20 14:25:39.294655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.521 [2024-11-20 14:25:39.294763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.521 [2024-11-20 14:25:39.294786] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to 
remove target bdev: No such device 00:15:00.521 [2024-11-20 14:25:39.338831] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.521 14:25:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.521 "name": "raid_bdev1", 00:15:00.521 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:00.521 "strip_size_kb": 0, 00:15:00.521 "state": "online", 00:15:00.521 "raid_level": "raid1", 00:15:00.521 "superblock": true, 00:15:00.521 "num_base_bdevs": 2, 00:15:00.521 "num_base_bdevs_discovered": 1, 00:15:00.521 "num_base_bdevs_operational": 1, 00:15:00.521 "base_bdevs_list": [ 00:15:00.521 { 00:15:00.521 "name": null, 00:15:00.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.521 "is_configured": false, 00:15:00.521 "data_offset": 0, 00:15:00.521 "data_size": 63488 00:15:00.521 }, 00:15:00.521 { 00:15:00.521 "name": "BaseBdev2", 00:15:00.521 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:00.521 "is_configured": true, 00:15:00.521 "data_offset": 2048, 00:15:00.521 "data_size": 63488 00:15:00.521 } 00:15:00.521 ] 00:15:00.521 }' 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.521 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.040 105.50 IOPS, 316.50 MiB/s [2024-11-20T14:25:40.022Z] 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:01.040 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.040 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:01.040 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:01.040 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.040 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.040 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:01.040 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.040 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.040 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.040 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.040 "name": "raid_bdev1", 00:15:01.040 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:01.040 "strip_size_kb": 0, 00:15:01.040 "state": "online", 00:15:01.041 "raid_level": "raid1", 00:15:01.041 "superblock": true, 00:15:01.041 "num_base_bdevs": 2, 00:15:01.041 "num_base_bdevs_discovered": 1, 00:15:01.041 "num_base_bdevs_operational": 1, 00:15:01.041 "base_bdevs_list": [ 00:15:01.041 { 00:15:01.041 "name": null, 00:15:01.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.041 "is_configured": false, 00:15:01.041 "data_offset": 0, 00:15:01.041 "data_size": 63488 00:15:01.041 }, 00:15:01.041 { 00:15:01.041 "name": "BaseBdev2", 00:15:01.041 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:01.041 "is_configured": true, 00:15:01.041 "data_offset": 2048, 00:15:01.041 "data_size": 63488 00:15:01.041 } 00:15:01.041 ] 00:15:01.041 }' 00:15:01.041 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.041 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:01.041 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.041 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:01.041 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:01.041 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:01.041 14:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.041 [2024-11-20 14:25:40.007143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.299 14:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.299 14:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:01.299 [2024-11-20 14:25:40.070556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:01.299 [2024-11-20 14:25:40.073208] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:01.299 [2024-11-20 14:25:40.190722] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:01.572 [2024-11-20 14:25:40.317807] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:01.572 [2024-11-20 14:25:40.318219] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:01.860 129.67 IOPS, 389.00 MiB/s [2024-11-20T14:25:40.842Z] [2024-11-20 14:25:40.552282] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:02.118 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.118 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.118 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.118 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.118 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.118 14:25:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.118 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.118 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.118 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.118 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.376 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.376 "name": "raid_bdev1", 00:15:02.376 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:02.376 "strip_size_kb": 0, 00:15:02.376 "state": "online", 00:15:02.376 "raid_level": "raid1", 00:15:02.376 "superblock": true, 00:15:02.376 "num_base_bdevs": 2, 00:15:02.376 "num_base_bdevs_discovered": 2, 00:15:02.376 "num_base_bdevs_operational": 2, 00:15:02.376 "process": { 00:15:02.376 "type": "rebuild", 00:15:02.376 "target": "spare", 00:15:02.376 "progress": { 00:15:02.376 "blocks": 14336, 00:15:02.376 "percent": 22 00:15:02.376 } 00:15:02.376 }, 00:15:02.376 "base_bdevs_list": [ 00:15:02.376 { 00:15:02.376 "name": "spare", 00:15:02.376 "uuid": "1a502574-3580-5aa9-bf64-fe3b04e1c091", 00:15:02.376 "is_configured": true, 00:15:02.376 "data_offset": 2048, 00:15:02.376 "data_size": 63488 00:15:02.376 }, 00:15:02.376 { 00:15:02.376 "name": "BaseBdev2", 00:15:02.376 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:02.376 "is_configured": true, 00:15:02.376 "data_offset": 2048, 00:15:02.376 "data_size": 63488 00:15:02.376 } 00:15:02.376 ] 00:15:02.376 }' 00:15:02.376 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.376 [2024-11-20 14:25:41.137977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 
18432 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:02.377 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=448 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.377 "name": "raid_bdev1", 00:15:02.377 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:02.377 "strip_size_kb": 0, 00:15:02.377 "state": "online", 00:15:02.377 "raid_level": "raid1", 00:15:02.377 "superblock": true, 00:15:02.377 "num_base_bdevs": 2, 00:15:02.377 "num_base_bdevs_discovered": 2, 00:15:02.377 "num_base_bdevs_operational": 2, 00:15:02.377 "process": { 00:15:02.377 "type": "rebuild", 00:15:02.377 "target": "spare", 00:15:02.377 "progress": { 00:15:02.377 "blocks": 16384, 00:15:02.377 "percent": 25 00:15:02.377 } 00:15:02.377 }, 00:15:02.377 "base_bdevs_list": [ 00:15:02.377 { 00:15:02.377 "name": "spare", 00:15:02.377 "uuid": "1a502574-3580-5aa9-bf64-fe3b04e1c091", 00:15:02.377 "is_configured": true, 00:15:02.377 "data_offset": 2048, 00:15:02.377 "data_size": 63488 00:15:02.377 }, 00:15:02.377 { 00:15:02.377 "name": "BaseBdev2", 00:15:02.377 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:02.377 "is_configured": true, 00:15:02.377 "data_offset": 2048, 00:15:02.377 "data_size": 63488 00:15:02.377 } 00:15:02.377 ] 00:15:02.377 }' 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.377 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.635 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:15:02.635 14:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:02.635 122.00 IOPS, 366.00 MiB/s [2024-11-20T14:25:41.617Z] [2024-11-20 14:25:41.507127] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:02.635 [2024-11-20 14:25:41.507798] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:02.894 [2024-11-20 14:25:41.654375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:03.153 [2024-11-20 14:25:42.034878] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:03.411 [2024-11-20 14:25:42.280895] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:03.411 14:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.411 14:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.411 14:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.411 14:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.411 14:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.411 14:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.411 14:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.411 14:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.412 14:25:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.412 14:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.670 14:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.670 14:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.670 "name": "raid_bdev1", 00:15:03.670 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:03.670 "strip_size_kb": 0, 00:15:03.671 "state": "online", 00:15:03.671 "raid_level": "raid1", 00:15:03.671 "superblock": true, 00:15:03.671 "num_base_bdevs": 2, 00:15:03.671 "num_base_bdevs_discovered": 2, 00:15:03.671 "num_base_bdevs_operational": 2, 00:15:03.671 "process": { 00:15:03.671 "type": "rebuild", 00:15:03.671 "target": "spare", 00:15:03.671 "progress": { 00:15:03.671 "blocks": 28672, 00:15:03.671 "percent": 45 00:15:03.671 } 00:15:03.671 }, 00:15:03.671 "base_bdevs_list": [ 00:15:03.671 { 00:15:03.671 "name": "spare", 00:15:03.671 "uuid": "1a502574-3580-5aa9-bf64-fe3b04e1c091", 00:15:03.671 "is_configured": true, 00:15:03.671 "data_offset": 2048, 00:15:03.671 "data_size": 63488 00:15:03.671 }, 00:15:03.671 { 00:15:03.671 "name": "BaseBdev2", 00:15:03.671 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:03.671 "is_configured": true, 00:15:03.671 "data_offset": 2048, 00:15:03.671 "data_size": 63488 00:15:03.671 } 00:15:03.671 ] 00:15:03.671 }' 00:15:03.671 14:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.671 14:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.671 14:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.671 108.40 IOPS, 325.20 MiB/s [2024-11-20T14:25:42.653Z] 14:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.671 14:25:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:03.671 [2024-11-20 14:25:42.594639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:03.671 [2024-11-20 14:25:42.595375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:03.930 [2024-11-20 14:25:42.806787] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:03.930 [2024-11-20 14:25:42.807220] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:04.188 [2024-11-20 14:25:43.147757] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:04.446 [2024-11-20 14:25:43.377889] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:04.704 96.00 IOPS, 288.00 MiB/s [2024-11-20T14:25:43.686Z] 14:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.704 14:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.704 14:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.704 14:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.704 14:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.704 14:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.704 14:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.704 14:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.704 14:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.704 14:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.704 14:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.704 14:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.704 "name": "raid_bdev1", 00:15:04.704 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:04.704 "strip_size_kb": 0, 00:15:04.704 "state": "online", 00:15:04.704 "raid_level": "raid1", 00:15:04.704 "superblock": true, 00:15:04.704 "num_base_bdevs": 2, 00:15:04.704 "num_base_bdevs_discovered": 2, 00:15:04.704 "num_base_bdevs_operational": 2, 00:15:04.704 "process": { 00:15:04.704 "type": "rebuild", 00:15:04.704 "target": "spare", 00:15:04.704 "progress": { 00:15:04.704 "blocks": 43008, 00:15:04.704 "percent": 67 00:15:04.704 } 00:15:04.704 }, 00:15:04.704 "base_bdevs_list": [ 00:15:04.704 { 00:15:04.704 "name": "spare", 00:15:04.704 "uuid": "1a502574-3580-5aa9-bf64-fe3b04e1c091", 00:15:04.704 "is_configured": true, 00:15:04.704 "data_offset": 2048, 00:15:04.704 "data_size": 63488 00:15:04.704 }, 00:15:04.704 { 00:15:04.704 "name": "BaseBdev2", 00:15:04.704 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:04.704 "is_configured": true, 00:15:04.704 "data_offset": 2048, 00:15:04.704 "data_size": 63488 00:15:04.704 } 00:15:04.704 ] 00:15:04.704 }' 00:15:04.704 14:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.704 14:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.704 14:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.961 14:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:15:04.961 14:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.575 [2024-11-20 14:25:44.429345] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:05.834 87.43 IOPS, 262.29 MiB/s [2024-11-20T14:25:44.816Z] 14:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.834 14:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.834 14:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.834 14:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.834 14:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.834 14:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.834 14:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.834 14:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.834 14:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.834 14:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.834 14:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.834 14:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.834 "name": "raid_bdev1", 00:15:05.834 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:05.834 "strip_size_kb": 0, 00:15:05.834 "state": "online", 00:15:05.834 "raid_level": "raid1", 00:15:05.834 "superblock": true, 00:15:05.834 "num_base_bdevs": 2, 00:15:05.834 "num_base_bdevs_discovered": 2, 
00:15:05.834 "num_base_bdevs_operational": 2, 00:15:05.834 "process": { 00:15:05.834 "type": "rebuild", 00:15:05.834 "target": "spare", 00:15:05.834 "progress": { 00:15:05.834 "blocks": 61440, 00:15:05.834 "percent": 96 00:15:05.834 } 00:15:05.834 }, 00:15:05.834 "base_bdevs_list": [ 00:15:05.834 { 00:15:05.834 "name": "spare", 00:15:05.834 "uuid": "1a502574-3580-5aa9-bf64-fe3b04e1c091", 00:15:05.834 "is_configured": true, 00:15:05.834 "data_offset": 2048, 00:15:05.834 "data_size": 63488 00:15:05.834 }, 00:15:05.834 { 00:15:05.834 "name": "BaseBdev2", 00:15:05.834 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:05.834 "is_configured": true, 00:15:05.834 "data_offset": 2048, 00:15:05.834 "data_size": 63488 00:15:05.834 } 00:15:05.834 ] 00:15:05.834 }' 00:15:05.834 14:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.834 [2024-11-20 14:25:44.771565] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:05.834 14:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.834 14:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.091 14:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.091 14:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:06.091 [2024-11-20 14:25:44.871632] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:06.091 [2024-11-20 14:25:44.874185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.916 80.50 IOPS, 241.50 MiB/s [2024-11-20T14:25:45.898Z] 14:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.916 14:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:15:06.916 14:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.916 14:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.916 14:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.916 14:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.916 14:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.916 14:25:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.916 14:25:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.916 14:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.916 14:25:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.174 14:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.174 "name": "raid_bdev1", 00:15:07.174 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:07.174 "strip_size_kb": 0, 00:15:07.174 "state": "online", 00:15:07.174 "raid_level": "raid1", 00:15:07.174 "superblock": true, 00:15:07.174 "num_base_bdevs": 2, 00:15:07.174 "num_base_bdevs_discovered": 2, 00:15:07.174 "num_base_bdevs_operational": 2, 00:15:07.174 "base_bdevs_list": [ 00:15:07.174 { 00:15:07.174 "name": "spare", 00:15:07.174 "uuid": "1a502574-3580-5aa9-bf64-fe3b04e1c091", 00:15:07.174 "is_configured": true, 00:15:07.174 "data_offset": 2048, 00:15:07.174 "data_size": 63488 00:15:07.174 }, 00:15:07.174 { 00:15:07.174 "name": "BaseBdev2", 00:15:07.174 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:07.174 "is_configured": true, 00:15:07.174 "data_offset": 2048, 00:15:07.174 "data_size": 63488 00:15:07.174 } 00:15:07.174 ] 00:15:07.174 }' 
00:15:07.174 14:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.174 14:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:07.174 14:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.174 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:07.174 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:07.174 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.174 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.174 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.174 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.174 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.174 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.174 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.174 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.174 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.174 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.174 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.174 "name": "raid_bdev1", 00:15:07.174 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:07.174 "strip_size_kb": 0, 00:15:07.174 "state": "online", 00:15:07.174 "raid_level": "raid1", 
00:15:07.174 "superblock": true, 00:15:07.174 "num_base_bdevs": 2, 00:15:07.174 "num_base_bdevs_discovered": 2, 00:15:07.174 "num_base_bdevs_operational": 2, 00:15:07.174 "base_bdevs_list": [ 00:15:07.174 { 00:15:07.174 "name": "spare", 00:15:07.174 "uuid": "1a502574-3580-5aa9-bf64-fe3b04e1c091", 00:15:07.174 "is_configured": true, 00:15:07.174 "data_offset": 2048, 00:15:07.174 "data_size": 63488 00:15:07.174 }, 00:15:07.174 { 00:15:07.174 "name": "BaseBdev2", 00:15:07.174 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:07.174 "is_configured": true, 00:15:07.174 "data_offset": 2048, 00:15:07.174 "data_size": 63488 00:15:07.174 } 00:15:07.174 ] 00:15:07.174 }' 00:15:07.174 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.174 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.174 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.431 "name": "raid_bdev1", 00:15:07.431 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:07.431 "strip_size_kb": 0, 00:15:07.431 "state": "online", 00:15:07.431 "raid_level": "raid1", 00:15:07.431 "superblock": true, 00:15:07.431 "num_base_bdevs": 2, 00:15:07.431 "num_base_bdevs_discovered": 2, 00:15:07.431 "num_base_bdevs_operational": 2, 00:15:07.431 "base_bdevs_list": [ 00:15:07.431 { 00:15:07.431 "name": "spare", 00:15:07.431 "uuid": "1a502574-3580-5aa9-bf64-fe3b04e1c091", 00:15:07.431 "is_configured": true, 00:15:07.431 "data_offset": 2048, 00:15:07.431 "data_size": 63488 00:15:07.431 }, 00:15:07.431 { 00:15:07.431 "name": "BaseBdev2", 00:15:07.431 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:07.431 "is_configured": true, 00:15:07.431 "data_offset": 2048, 00:15:07.431 "data_size": 63488 00:15:07.431 } 00:15:07.431 ] 00:15:07.431 }' 00:15:07.431 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.432 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:15:07.947 75.89 IOPS, 227.67 MiB/s [2024-11-20T14:25:46.929Z] 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.947 [2024-11-20 14:25:46.698488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.947 [2024-11-20 14:25:46.698546] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.947 00:15:07.947 Latency(us) 00:15:07.947 [2024-11-20T14:25:46.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.947 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:07.947 raid_bdev1 : 9.28 74.43 223.28 0.00 0.00 18516.82 309.06 129642.12 00:15:07.947 [2024-11-20T14:25:46.929Z] =================================================================================================================== 00:15:07.947 [2024-11-20T14:25:46.929Z] Total : 74.43 223.28 0.00 0.00 18516.82 309.06 129642.12 00:15:07.947 [2024-11-20 14:25:46.802966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.947 [2024-11-20 14:25:46.803089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.947 [2024-11-20 14:25:46.803213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.947 [2024-11-20 14:25:46.803236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:07.947 { 00:15:07.947 "results": [ 00:15:07.947 { 00:15:07.947 "job": "raid_bdev1", 00:15:07.947 "core_mask": "0x1", 00:15:07.947 "workload": "randrw", 00:15:07.947 "percentage": 50, 00:15:07.947 "status": "finished", 00:15:07.947 
"queue_depth": 2, 00:15:07.947 "io_size": 3145728, 00:15:07.947 "runtime": 9.284407, 00:15:07.947 "iops": 74.4258626318299, 00:15:07.947 "mibps": 223.27758789548972, 00:15:07.947 "io_failed": 0, 00:15:07.947 "io_timeout": 0, 00:15:07.947 "avg_latency_us": 18516.824544138926, 00:15:07.947 "min_latency_us": 309.0618181818182, 00:15:07.947 "max_latency_us": 129642.12363636364 00:15:07.947 } 00:15:07.947 ], 00:15:07.947 "core_count": 1 00:15:07.947 } 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:07.947 14:25:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:07.947 14:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:08.205 /dev/nbd0 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.205 1+0 records in 00:15:08.205 1+0 records out 00:15:08.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435646 s, 9.4 MB/s 00:15:08.205 14:25:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:15:08.205 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:08.463 /dev/nbd1 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.721 1+0 records in 00:15:08.721 1+0 records out 00:15:08.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346182 s, 11.8 MB/s 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.721 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:08.979 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:08.979 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:08.979 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:08.979 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:08.979 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.979 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:08.979 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:08.979 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.979 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:08.979 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.979 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:08.979 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:08.979 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:08.979 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.979 14:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.237 [2024-11-20 14:25:48.193762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:09.237 [2024-11-20 14:25:48.193853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.237 [2024-11-20 14:25:48.193892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:09.237 [2024-11-20 14:25:48.193910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.237 [2024-11-20 14:25:48.196898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.237 [2024-11-20 14:25:48.196955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:09.237 [2024-11-20 14:25:48.197113] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:09.237 [2024-11-20 14:25:48.197185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:09.237 [2024-11-20 14:25:48.197379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:09.237 spare 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.237 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.495 [2024-11-20 14:25:48.297548] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:09.495 [2024-11-20 14:25:48.297614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:09.495 [2024-11-20 14:25:48.298095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:15:09.495 [2024-11-20 14:25:48.298370] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:09.495 [2024-11-20 14:25:48.298403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:09.495 [2024-11-20 14:25:48.298659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.495 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.495 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:09.495 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.495 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.495 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.495 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.495 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:09.495 14:25:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.495 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.495 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.495 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.495 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.495 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.495 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.495 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.495 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.495 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.495 "name": "raid_bdev1", 00:15:09.495 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:09.495 "strip_size_kb": 0, 00:15:09.495 "state": "online", 00:15:09.495 "raid_level": "raid1", 00:15:09.495 "superblock": true, 00:15:09.495 "num_base_bdevs": 2, 00:15:09.495 "num_base_bdevs_discovered": 2, 00:15:09.495 "num_base_bdevs_operational": 2, 00:15:09.495 "base_bdevs_list": [ 00:15:09.495 { 00:15:09.495 "name": "spare", 00:15:09.495 "uuid": "1a502574-3580-5aa9-bf64-fe3b04e1c091", 00:15:09.495 "is_configured": true, 00:15:09.495 "data_offset": 2048, 00:15:09.495 "data_size": 63488 00:15:09.495 }, 00:15:09.495 { 00:15:09.495 "name": "BaseBdev2", 00:15:09.495 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:09.495 "is_configured": true, 00:15:09.495 "data_offset": 2048, 00:15:09.495 "data_size": 63488 00:15:09.495 } 00:15:09.495 ] 00:15:09.495 }' 00:15:09.495 14:25:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.495 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.063 "name": "raid_bdev1", 00:15:10.063 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:10.063 "strip_size_kb": 0, 00:15:10.063 "state": "online", 00:15:10.063 "raid_level": "raid1", 00:15:10.063 "superblock": true, 00:15:10.063 "num_base_bdevs": 2, 00:15:10.063 "num_base_bdevs_discovered": 2, 00:15:10.063 "num_base_bdevs_operational": 2, 00:15:10.063 "base_bdevs_list": [ 00:15:10.063 { 00:15:10.063 "name": "spare", 00:15:10.063 "uuid": "1a502574-3580-5aa9-bf64-fe3b04e1c091", 00:15:10.063 "is_configured": true, 00:15:10.063 "data_offset": 2048, 00:15:10.063 
"data_size": 63488 00:15:10.063 }, 00:15:10.063 { 00:15:10.063 "name": "BaseBdev2", 00:15:10.063 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:10.063 "is_configured": true, 00:15:10.063 "data_offset": 2048, 00:15:10.063 "data_size": 63488 00:15:10.063 } 00:15:10.063 ] 00:15:10.063 }' 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:10.063 14:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.063 [2024-11-20 14:25:49.030924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.063 14:25:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.063 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.321 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.321 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.321 "name": "raid_bdev1", 00:15:10.321 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:10.321 "strip_size_kb": 0, 00:15:10.321 "state": "online", 00:15:10.321 "raid_level": "raid1", 00:15:10.321 
"superblock": true, 00:15:10.321 "num_base_bdevs": 2, 00:15:10.321 "num_base_bdevs_discovered": 1, 00:15:10.322 "num_base_bdevs_operational": 1, 00:15:10.322 "base_bdevs_list": [ 00:15:10.322 { 00:15:10.322 "name": null, 00:15:10.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.322 "is_configured": false, 00:15:10.322 "data_offset": 0, 00:15:10.322 "data_size": 63488 00:15:10.322 }, 00:15:10.322 { 00:15:10.322 "name": "BaseBdev2", 00:15:10.322 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:10.322 "is_configured": true, 00:15:10.322 "data_offset": 2048, 00:15:10.322 "data_size": 63488 00:15:10.322 } 00:15:10.322 ] 00:15:10.322 }' 00:15:10.322 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.322 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.580 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:10.580 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.580 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.580 [2024-11-20 14:25:49.531178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:10.580 [2024-11-20 14:25:49.531438] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:10.580 [2024-11-20 14:25:49.531461] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:10.580 [2024-11-20 14:25:49.531520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:10.580 [2024-11-20 14:25:49.547933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:15:10.580 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.580 14:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:10.580 [2024-11-20 14:25:49.550528] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:11.954 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.954 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.954 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.954 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.954 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.954 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.954 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.955 "name": "raid_bdev1", 00:15:11.955 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:11.955 "strip_size_kb": 0, 00:15:11.955 "state": "online", 
00:15:11.955 "raid_level": "raid1", 00:15:11.955 "superblock": true, 00:15:11.955 "num_base_bdevs": 2, 00:15:11.955 "num_base_bdevs_discovered": 2, 00:15:11.955 "num_base_bdevs_operational": 2, 00:15:11.955 "process": { 00:15:11.955 "type": "rebuild", 00:15:11.955 "target": "spare", 00:15:11.955 "progress": { 00:15:11.955 "blocks": 20480, 00:15:11.955 "percent": 32 00:15:11.955 } 00:15:11.955 }, 00:15:11.955 "base_bdevs_list": [ 00:15:11.955 { 00:15:11.955 "name": "spare", 00:15:11.955 "uuid": "1a502574-3580-5aa9-bf64-fe3b04e1c091", 00:15:11.955 "is_configured": true, 00:15:11.955 "data_offset": 2048, 00:15:11.955 "data_size": 63488 00:15:11.955 }, 00:15:11.955 { 00:15:11.955 "name": "BaseBdev2", 00:15:11.955 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:11.955 "is_configured": true, 00:15:11.955 "data_offset": 2048, 00:15:11.955 "data_size": 63488 00:15:11.955 } 00:15:11.955 ] 00:15:11.955 }' 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.955 [2024-11-20 14:25:50.715880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.955 [2024-11-20 14:25:50.759832] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:11.955 [2024-11-20 
14:25:50.759945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.955 [2024-11-20 14:25:50.759974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.955 [2024-11-20 14:25:50.760015] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.955 "name": "raid_bdev1", 00:15:11.955 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:11.955 "strip_size_kb": 0, 00:15:11.955 "state": "online", 00:15:11.955 "raid_level": "raid1", 00:15:11.955 "superblock": true, 00:15:11.955 "num_base_bdevs": 2, 00:15:11.955 "num_base_bdevs_discovered": 1, 00:15:11.955 "num_base_bdevs_operational": 1, 00:15:11.955 "base_bdevs_list": [ 00:15:11.955 { 00:15:11.955 "name": null, 00:15:11.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.955 "is_configured": false, 00:15:11.955 "data_offset": 0, 00:15:11.955 "data_size": 63488 00:15:11.955 }, 00:15:11.955 { 00:15:11.955 "name": "BaseBdev2", 00:15:11.955 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:11.955 "is_configured": true, 00:15:11.955 "data_offset": 2048, 00:15:11.955 "data_size": 63488 00:15:11.955 } 00:15:11.955 ] 00:15:11.955 }' 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.955 14:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.520 14:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:12.520 14:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.520 14:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.520 [2024-11-20 14:25:51.306896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:12.520 [2024-11-20 14:25:51.306998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.520 [2024-11-20 14:25:51.307036] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:15:12.520 [2024-11-20 14:25:51.307050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.520 [2024-11-20 14:25:51.307692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.520 [2024-11-20 14:25:51.307735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:12.520 [2024-11-20 14:25:51.307860] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:12.520 [2024-11-20 14:25:51.307881] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:12.520 [2024-11-20 14:25:51.307897] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:12.520 [2024-11-20 14:25:51.307936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.520 [2024-11-20 14:25:51.324182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:15:12.520 spare 00:15:12.520 14:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.520 14:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:12.520 [2024-11-20 14:25:51.326744] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:13.456 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.456 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.456 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.456 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.456 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.456 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.456 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.456 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.456 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.456 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.456 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.456 "name": "raid_bdev1", 00:15:13.456 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:13.456 "strip_size_kb": 0, 00:15:13.456 "state": "online", 00:15:13.456 "raid_level": "raid1", 00:15:13.456 "superblock": true, 00:15:13.456 "num_base_bdevs": 2, 00:15:13.456 "num_base_bdevs_discovered": 2, 00:15:13.456 "num_base_bdevs_operational": 2, 00:15:13.456 "process": { 00:15:13.456 "type": "rebuild", 00:15:13.456 "target": "spare", 00:15:13.456 "progress": { 00:15:13.456 "blocks": 20480, 00:15:13.456 "percent": 32 00:15:13.456 } 00:15:13.456 }, 00:15:13.456 "base_bdevs_list": [ 00:15:13.456 { 00:15:13.456 "name": "spare", 00:15:13.456 "uuid": "1a502574-3580-5aa9-bf64-fe3b04e1c091", 00:15:13.456 "is_configured": true, 00:15:13.456 "data_offset": 2048, 00:15:13.456 "data_size": 63488 00:15:13.456 }, 00:15:13.456 { 00:15:13.456 "name": "BaseBdev2", 00:15:13.456 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:13.456 "is_configured": true, 00:15:13.456 "data_offset": 2048, 00:15:13.456 "data_size": 63488 00:15:13.456 } 00:15:13.456 ] 00:15:13.456 }' 00:15:13.456 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.715 [2024-11-20 14:25:52.492155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.715 [2024-11-20 14:25:52.535765] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:13.715 [2024-11-20 14:25:52.535881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.715 [2024-11-20 14:25:52.535905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.715 [2024-11-20 14:25:52.535923] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.715 "name": "raid_bdev1", 00:15:13.715 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:13.715 "strip_size_kb": 0, 00:15:13.715 "state": "online", 00:15:13.715 "raid_level": "raid1", 00:15:13.715 "superblock": true, 00:15:13.715 "num_base_bdevs": 2, 00:15:13.715 "num_base_bdevs_discovered": 1, 00:15:13.715 "num_base_bdevs_operational": 1, 00:15:13.715 "base_bdevs_list": [ 00:15:13.715 { 00:15:13.715 "name": null, 00:15:13.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.715 "is_configured": false, 00:15:13.715 "data_offset": 0, 00:15:13.715 "data_size": 63488 00:15:13.715 }, 00:15:13.715 { 00:15:13.715 "name": "BaseBdev2", 00:15:13.715 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:13.715 "is_configured": true, 00:15:13.715 "data_offset": 2048, 00:15:13.715 "data_size": 63488 00:15:13.715 } 00:15:13.715 ] 00:15:13.715 }' 
00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.715 14:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.282 "name": "raid_bdev1", 00:15:14.282 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:14.282 "strip_size_kb": 0, 00:15:14.282 "state": "online", 00:15:14.282 "raid_level": "raid1", 00:15:14.282 "superblock": true, 00:15:14.282 "num_base_bdevs": 2, 00:15:14.282 "num_base_bdevs_discovered": 1, 00:15:14.282 "num_base_bdevs_operational": 1, 00:15:14.282 "base_bdevs_list": [ 00:15:14.282 { 00:15:14.282 "name": null, 00:15:14.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.282 "is_configured": false, 00:15:14.282 "data_offset": 0, 
00:15:14.282 "data_size": 63488 00:15:14.282 }, 00:15:14.282 { 00:15:14.282 "name": "BaseBdev2", 00:15:14.282 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:14.282 "is_configured": true, 00:15:14.282 "data_offset": 2048, 00:15:14.282 "data_size": 63488 00:15:14.282 } 00:15:14.282 ] 00:15:14.282 }' 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.282 [2024-11-20 14:25:53.207212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:14.282 [2024-11-20 14:25:53.207291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.282 [2024-11-20 14:25:53.207330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:14.282 [2024-11-20 14:25:53.207351] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.282 [2024-11-20 14:25:53.207909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.282 [2024-11-20 14:25:53.207952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:14.282 [2024-11-20 14:25:53.208077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:14.282 [2024-11-20 14:25:53.208108] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:14.282 [2024-11-20 14:25:53.208120] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:14.282 [2024-11-20 14:25:53.208135] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:14.282 BaseBdev1 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.282 14:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:15.657 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:15.657 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.657 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.657 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.657 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.657 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:15.657 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.657 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.657 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.657 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.657 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.657 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.657 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.657 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.658 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.658 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.658 "name": "raid_bdev1", 00:15:15.658 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:15.658 "strip_size_kb": 0, 00:15:15.658 "state": "online", 00:15:15.658 "raid_level": "raid1", 00:15:15.658 "superblock": true, 00:15:15.658 "num_base_bdevs": 2, 00:15:15.658 "num_base_bdevs_discovered": 1, 00:15:15.658 "num_base_bdevs_operational": 1, 00:15:15.658 "base_bdevs_list": [ 00:15:15.658 { 00:15:15.658 "name": null, 00:15:15.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.658 "is_configured": false, 00:15:15.658 "data_offset": 0, 00:15:15.658 "data_size": 63488 00:15:15.658 }, 00:15:15.658 { 00:15:15.658 "name": "BaseBdev2", 00:15:15.658 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:15.658 "is_configured": true, 00:15:15.658 "data_offset": 2048, 00:15:15.658 "data_size": 63488 00:15:15.658 } 00:15:15.658 ] 00:15:15.658 }' 00:15:15.658 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.658 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:15.915 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:15.915 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.915 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:15.915 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:15.915 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.915 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.915 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.915 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.915 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.915 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.915 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.915 "name": "raid_bdev1", 00:15:15.915 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:15.915 "strip_size_kb": 0, 00:15:15.915 "state": "online", 00:15:15.915 "raid_level": "raid1", 00:15:15.915 "superblock": true, 00:15:15.915 "num_base_bdevs": 2, 00:15:15.915 "num_base_bdevs_discovered": 1, 00:15:15.915 "num_base_bdevs_operational": 1, 00:15:15.915 "base_bdevs_list": [ 00:15:15.915 { 00:15:15.915 "name": null, 00:15:15.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.915 "is_configured": false, 00:15:15.915 "data_offset": 0, 00:15:15.916 "data_size": 63488 00:15:15.916 }, 00:15:15.916 { 00:15:15.916 "name": "BaseBdev2", 00:15:15.916 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:15.916 "is_configured": true, 
00:15:15.916 "data_offset": 2048, 00:15:15.916 "data_size": 63488 00:15:15.916 } 00:15:15.916 ] 00:15:15.916 }' 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.916 [2024-11-20 14:25:54.887882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.916 [2024-11-20 14:25:54.888107] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:15.916 [2024-11-20 14:25:54.888128] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:15.916 request: 00:15:15.916 { 00:15:15.916 "base_bdev": "BaseBdev1", 00:15:15.916 "raid_bdev": "raid_bdev1", 00:15:15.916 "method": "bdev_raid_add_base_bdev", 00:15:15.916 "req_id": 1 00:15:15.916 } 00:15:15.916 Got JSON-RPC error response 00:15:15.916 response: 00:15:15.916 { 00:15:15.916 "code": -22, 00:15:15.916 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:15.916 } 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:15.916 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:16.174 14:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:17.107 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:17.107 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.107 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.107 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.107 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.107 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:17.107 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.107 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.107 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.107 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.107 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.107 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.107 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.107 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.107 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.107 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.107 "name": "raid_bdev1", 00:15:17.107 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:17.107 "strip_size_kb": 0, 00:15:17.107 "state": "online", 00:15:17.107 "raid_level": "raid1", 00:15:17.107 "superblock": true, 00:15:17.107 "num_base_bdevs": 2, 00:15:17.107 "num_base_bdevs_discovered": 1, 00:15:17.107 "num_base_bdevs_operational": 1, 00:15:17.108 "base_bdevs_list": [ 00:15:17.108 { 00:15:17.108 "name": null, 00:15:17.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.108 "is_configured": false, 00:15:17.108 "data_offset": 0, 00:15:17.108 "data_size": 63488 00:15:17.108 }, 00:15:17.108 { 00:15:17.108 "name": "BaseBdev2", 00:15:17.108 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:17.108 "is_configured": true, 00:15:17.108 "data_offset": 2048, 00:15:17.108 "data_size": 63488 00:15:17.108 } 00:15:17.108 ] 00:15:17.108 }' 
00:15:17.108 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.108 14:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.673 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.673 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.673 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.673 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.673 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.673 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.673 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.673 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.673 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.673 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.673 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.673 "name": "raid_bdev1", 00:15:17.673 "uuid": "63d938aa-7146-422b-b2c0-50a2aac1694e", 00:15:17.673 "strip_size_kb": 0, 00:15:17.673 "state": "online", 00:15:17.673 "raid_level": "raid1", 00:15:17.673 "superblock": true, 00:15:17.673 "num_base_bdevs": 2, 00:15:17.673 "num_base_bdevs_discovered": 1, 00:15:17.673 "num_base_bdevs_operational": 1, 00:15:17.673 "base_bdevs_list": [ 00:15:17.673 { 00:15:17.673 "name": null, 00:15:17.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.673 "is_configured": false, 00:15:17.673 "data_offset": 0, 
00:15:17.673 "data_size": 63488 00:15:17.673 }, 00:15:17.673 { 00:15:17.673 "name": "BaseBdev2", 00:15:17.673 "uuid": "3b642d12-52d6-59b9-bb41-6459d97a37d3", 00:15:17.673 "is_configured": true, 00:15:17.673 "data_offset": 2048, 00:15:17.673 "data_size": 63488 00:15:17.673 } 00:15:17.673 ] 00:15:17.673 }' 00:15:17.673 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.673 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.673 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.673 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.673 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77097 00:15:17.674 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77097 ']' 00:15:17.674 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77097 00:15:17.674 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:17.674 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:17.674 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77097 00:15:17.674 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.674 killing process with pid 77097 00:15:17.674 Received shutdown signal, test time was about 19.066490 seconds 00:15:17.674 00:15:17.674 Latency(us) 00:15:17.674 [2024-11-20T14:25:56.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.674 [2024-11-20T14:25:56.656Z] =================================================================================================================== 00:15:17.674 
[2024-11-20T14:25:56.656Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:17.674 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.674 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77097' 00:15:17.674 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77097 00:15:17.674 [2024-11-20 14:25:56.564505] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:17.674 14:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77097 00:15:17.674 [2024-11-20 14:25:56.564680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.674 [2024-11-20 14:25:56.564763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.674 [2024-11-20 14:25:56.564779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:17.931 [2024-11-20 14:25:56.768995] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:19.307 00:15:19.307 real 0m22.403s 00:15:19.307 user 0m30.063s 00:15:19.307 sys 0m1.948s 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:19.307 ************************************ 00:15:19.307 END TEST raid_rebuild_test_sb_io 00:15:19.307 ************************************ 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.307 14:25:57 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:19.307 14:25:57 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:15:19.307 14:25:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 
']' 00:15:19.307 14:25:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:19.307 14:25:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:19.307 ************************************ 00:15:19.307 START TEST raid_rebuild_test 00:15:19.307 ************************************ 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # 
(( i <= num_base_bdevs )) 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:19.307 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:19.308 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:19.308 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:19.308 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:19.308 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:19.308 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77816 00:15:19.308 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77816 00:15:19.308 14:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:19.308 14:25:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77816 ']' 00:15:19.308 14:25:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.308 
14:25:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.308 14:25:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.308 14:25:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.308 14:25:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.308 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:19.308 Zero copy mechanism will not be used. 00:15:19.308 [2024-11-20 14:25:58.056521] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:15:19.308 [2024-11-20 14:25:58.056690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77816 ] 00:15:19.308 [2024-11-20 14:25:58.238680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.567 [2024-11-20 14:25:58.371240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.825 [2024-11-20 14:25:58.582331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.825 [2024-11-20 14:25:58.582402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.083 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.083 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:20.083 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:20.083 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 
-b BaseBdev1_malloc 00:15:20.083 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.083 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.083 BaseBdev1_malloc 00:15:20.083 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.083 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:20.083 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.083 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.348 [2024-11-20 14:25:59.069066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:20.348 [2024-11-20 14:25:59.069304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.348 [2024-11-20 14:25:59.069349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:20.348 [2024-11-20 14:25:59.069371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.348 [2024-11-20 14:25:59.072300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.348 [2024-11-20 14:25:59.072492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:20.348 BaseBdev1 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:15:20.348 BaseBdev2_malloc 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.348 [2024-11-20 14:25:59.122035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:20.348 [2024-11-20 14:25:59.122116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.348 [2024-11-20 14:25:59.122151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:20.348 [2024-11-20 14:25:59.122171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.348 [2024-11-20 14:25:59.124995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.348 [2024-11-20 14:25:59.125082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:20.348 BaseBdev2 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.348 BaseBdev3_malloc 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.348 [2024-11-20 14:25:59.187991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:20.348 [2024-11-20 14:25:59.188087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.348 [2024-11-20 14:25:59.188120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:20.348 [2024-11-20 14:25:59.188140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.348 [2024-11-20 14:25:59.190828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.348 [2024-11-20 14:25:59.190895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:20.348 BaseBdev3 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.348 BaseBdev4_malloc 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:20.348 [2024-11-20 14:25:59.240697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:20.348 [2024-11-20 14:25:59.240801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.348 [2024-11-20 14:25:59.240847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:20.348 [2024-11-20 14:25:59.240875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.348 [2024-11-20 14:25:59.244557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.348 [2024-11-20 14:25:59.244803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:20.348 BaseBdev4 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.348 spare_malloc 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.348 spare_delay 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:20.348 
14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.348 [2024-11-20 14:25:59.303572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:20.348 [2024-11-20 14:25:59.303652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.348 [2024-11-20 14:25:59.303682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:20.348 [2024-11-20 14:25:59.303701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.348 [2024-11-20 14:25:59.306534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.348 [2024-11-20 14:25:59.306587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:20.348 spare 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.348 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.348 [2024-11-20 14:25:59.315624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.348 [2024-11-20 14:25:59.318138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.348 [2024-11-20 14:25:59.318230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.349 [2024-11-20 14:25:59.318315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:20.349 [2024-11-20 14:25:59.318431] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:15:20.349 [2024-11-20 14:25:59.318456] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:20.349 [2024-11-20 14:25:59.318784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:20.349 [2024-11-20 14:25:59.319069] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:20.349 [2024-11-20 14:25:59.319091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:20.349 [2024-11-20 14:25:59.319295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.349 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.349 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:20.349 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.349 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.349 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.349 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.349 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.349 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.349 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.349 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.349 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.608 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.608 14:25:59 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.608 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.608 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.608 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.608 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.608 "name": "raid_bdev1", 00:15:20.608 "uuid": "e58e2f76-b15a-49b6-b180-362e1024d091", 00:15:20.608 "strip_size_kb": 0, 00:15:20.608 "state": "online", 00:15:20.608 "raid_level": "raid1", 00:15:20.608 "superblock": false, 00:15:20.608 "num_base_bdevs": 4, 00:15:20.608 "num_base_bdevs_discovered": 4, 00:15:20.608 "num_base_bdevs_operational": 4, 00:15:20.608 "base_bdevs_list": [ 00:15:20.608 { 00:15:20.608 "name": "BaseBdev1", 00:15:20.608 "uuid": "8952bd39-6f68-5d88-91e5-76edd6df4ec7", 00:15:20.608 "is_configured": true, 00:15:20.608 "data_offset": 0, 00:15:20.608 "data_size": 65536 00:15:20.608 }, 00:15:20.608 { 00:15:20.608 "name": "BaseBdev2", 00:15:20.608 "uuid": "b55f7b2c-4207-5e32-8b32-f2c1498c30a0", 00:15:20.608 "is_configured": true, 00:15:20.608 "data_offset": 0, 00:15:20.608 "data_size": 65536 00:15:20.608 }, 00:15:20.608 { 00:15:20.608 "name": "BaseBdev3", 00:15:20.608 "uuid": "726f8497-b506-5df5-92a2-fb97b4db14fa", 00:15:20.608 "is_configured": true, 00:15:20.608 "data_offset": 0, 00:15:20.608 "data_size": 65536 00:15:20.608 }, 00:15:20.608 { 00:15:20.608 "name": "BaseBdev4", 00:15:20.608 "uuid": "7089338f-1d19-56a0-bd27-cc7a8cc3c49e", 00:15:20.608 "is_configured": true, 00:15:20.608 "data_offset": 0, 00:15:20.608 "data_size": 65536 00:15:20.608 } 00:15:20.608 ] 00:15:20.608 }' 00:15:20.608 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.608 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:20.867 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:20.867 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.867 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:20.867 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.867 [2024-11-20 14:25:59.832208] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:21.125 14:25:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:21.384 [2024-11-20 14:26:00.280003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:21.384 /dev/nbd0 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:21.384 14:26:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:21.384 1+0 records in 00:15:21.384 1+0 records out 00:15:21.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325832 s, 12.6 MB/s 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:21.384 14:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:31.364 65536+0 records in 00:15:31.364 65536+0 records out 00:15:31.364 33554432 bytes (34 MB, 32 MiB) copied, 8.27619 s, 4.1 MB/s 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:31.364 
14:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:31.364 [2024-11-20 14:26:08.896781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.364 [2024-11-20 14:26:08.928883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.364 "name": "raid_bdev1", 00:15:31.364 "uuid": "e58e2f76-b15a-49b6-b180-362e1024d091", 00:15:31.364 "strip_size_kb": 0, 00:15:31.364 "state": "online", 00:15:31.364 "raid_level": "raid1", 00:15:31.364 "superblock": false, 00:15:31.364 "num_base_bdevs": 4, 00:15:31.364 "num_base_bdevs_discovered": 3, 00:15:31.364 "num_base_bdevs_operational": 3, 00:15:31.364 "base_bdevs_list": [ 00:15:31.364 { 00:15:31.364 "name": null, 00:15:31.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.364 
"is_configured": false, 00:15:31.364 "data_offset": 0, 00:15:31.364 "data_size": 65536 00:15:31.364 }, 00:15:31.364 { 00:15:31.364 "name": "BaseBdev2", 00:15:31.364 "uuid": "b55f7b2c-4207-5e32-8b32-f2c1498c30a0", 00:15:31.364 "is_configured": true, 00:15:31.364 "data_offset": 0, 00:15:31.364 "data_size": 65536 00:15:31.364 }, 00:15:31.364 { 00:15:31.364 "name": "BaseBdev3", 00:15:31.364 "uuid": "726f8497-b506-5df5-92a2-fb97b4db14fa", 00:15:31.364 "is_configured": true, 00:15:31.364 "data_offset": 0, 00:15:31.364 "data_size": 65536 00:15:31.364 }, 00:15:31.364 { 00:15:31.364 "name": "BaseBdev4", 00:15:31.364 "uuid": "7089338f-1d19-56a0-bd27-cc7a8cc3c49e", 00:15:31.364 "is_configured": true, 00:15:31.364 "data_offset": 0, 00:15:31.364 "data_size": 65536 00:15:31.364 } 00:15:31.364 ] 00:15:31.364 }' 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.364 14:26:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.364 14:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:31.364 14:26:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.364 14:26:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.364 [2024-11-20 14:26:09.397010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.364 [2024-11-20 14:26:09.411278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:15:31.364 14:26:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.364 14:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:31.364 [2024-11-20 14:26:09.413798] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.623 "name": "raid_bdev1", 00:15:31.623 "uuid": "e58e2f76-b15a-49b6-b180-362e1024d091", 00:15:31.623 "strip_size_kb": 0, 00:15:31.623 "state": "online", 00:15:31.623 "raid_level": "raid1", 00:15:31.623 "superblock": false, 00:15:31.623 "num_base_bdevs": 4, 00:15:31.623 "num_base_bdevs_discovered": 4, 00:15:31.623 "num_base_bdevs_operational": 4, 00:15:31.623 "process": { 00:15:31.623 "type": "rebuild", 00:15:31.623 "target": "spare", 00:15:31.623 "progress": { 00:15:31.623 "blocks": 20480, 00:15:31.623 "percent": 31 00:15:31.623 } 00:15:31.623 }, 00:15:31.623 "base_bdevs_list": [ 00:15:31.623 { 00:15:31.623 "name": "spare", 00:15:31.623 "uuid": "7c630a74-c4d6-513e-bd55-b6c986d5205c", 00:15:31.623 "is_configured": true, 00:15:31.623 "data_offset": 0, 00:15:31.623 "data_size": 65536 00:15:31.623 }, 00:15:31.623 { 00:15:31.623 "name": "BaseBdev2", 00:15:31.623 "uuid": 
"b55f7b2c-4207-5e32-8b32-f2c1498c30a0", 00:15:31.623 "is_configured": true, 00:15:31.623 "data_offset": 0, 00:15:31.623 "data_size": 65536 00:15:31.623 }, 00:15:31.623 { 00:15:31.623 "name": "BaseBdev3", 00:15:31.623 "uuid": "726f8497-b506-5df5-92a2-fb97b4db14fa", 00:15:31.623 "is_configured": true, 00:15:31.623 "data_offset": 0, 00:15:31.623 "data_size": 65536 00:15:31.623 }, 00:15:31.623 { 00:15:31.623 "name": "BaseBdev4", 00:15:31.623 "uuid": "7089338f-1d19-56a0-bd27-cc7a8cc3c49e", 00:15:31.623 "is_configured": true, 00:15:31.623 "data_offset": 0, 00:15:31.623 "data_size": 65536 00:15:31.623 } 00:15:31.623 ] 00:15:31.623 }' 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.623 14:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.623 [2024-11-20 14:26:10.575326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:31.882 [2024-11-20 14:26:10.622543] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:31.882 [2024-11-20 14:26:10.622630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.882 [2024-11-20 14:26:10.622657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:31.882 [2024-11-20 14:26:10.622672] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:15:31.882 14:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.882 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:31.882 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.882 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.882 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.882 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.882 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.882 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.882 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.882 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.882 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.882 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.882 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.882 14:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.882 14:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.882 14:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.882 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.882 "name": "raid_bdev1", 00:15:31.882 "uuid": "e58e2f76-b15a-49b6-b180-362e1024d091", 00:15:31.882 "strip_size_kb": 0, 00:15:31.882 "state": "online", 
00:15:31.882 "raid_level": "raid1", 00:15:31.882 "superblock": false, 00:15:31.882 "num_base_bdevs": 4, 00:15:31.882 "num_base_bdevs_discovered": 3, 00:15:31.882 "num_base_bdevs_operational": 3, 00:15:31.882 "base_bdevs_list": [ 00:15:31.882 { 00:15:31.882 "name": null, 00:15:31.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.882 "is_configured": false, 00:15:31.882 "data_offset": 0, 00:15:31.882 "data_size": 65536 00:15:31.882 }, 00:15:31.882 { 00:15:31.882 "name": "BaseBdev2", 00:15:31.882 "uuid": "b55f7b2c-4207-5e32-8b32-f2c1498c30a0", 00:15:31.882 "is_configured": true, 00:15:31.882 "data_offset": 0, 00:15:31.882 "data_size": 65536 00:15:31.882 }, 00:15:31.882 { 00:15:31.882 "name": "BaseBdev3", 00:15:31.882 "uuid": "726f8497-b506-5df5-92a2-fb97b4db14fa", 00:15:31.882 "is_configured": true, 00:15:31.882 "data_offset": 0, 00:15:31.882 "data_size": 65536 00:15:31.882 }, 00:15:31.882 { 00:15:31.882 "name": "BaseBdev4", 00:15:31.882 "uuid": "7089338f-1d19-56a0-bd27-cc7a8cc3c49e", 00:15:31.882 "is_configured": true, 00:15:31.883 "data_offset": 0, 00:15:31.883 "data_size": 65536 00:15:31.883 } 00:15:31.883 ] 00:15:31.883 }' 00:15:31.883 14:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.883 14:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.450 "name": "raid_bdev1", 00:15:32.450 "uuid": "e58e2f76-b15a-49b6-b180-362e1024d091", 00:15:32.450 "strip_size_kb": 0, 00:15:32.450 "state": "online", 00:15:32.450 "raid_level": "raid1", 00:15:32.450 "superblock": false, 00:15:32.450 "num_base_bdevs": 4, 00:15:32.450 "num_base_bdevs_discovered": 3, 00:15:32.450 "num_base_bdevs_operational": 3, 00:15:32.450 "base_bdevs_list": [ 00:15:32.450 { 00:15:32.450 "name": null, 00:15:32.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.450 "is_configured": false, 00:15:32.450 "data_offset": 0, 00:15:32.450 "data_size": 65536 00:15:32.450 }, 00:15:32.450 { 00:15:32.450 "name": "BaseBdev2", 00:15:32.450 "uuid": "b55f7b2c-4207-5e32-8b32-f2c1498c30a0", 00:15:32.450 "is_configured": true, 00:15:32.450 "data_offset": 0, 00:15:32.450 "data_size": 65536 00:15:32.450 }, 00:15:32.450 { 00:15:32.450 "name": "BaseBdev3", 00:15:32.450 "uuid": "726f8497-b506-5df5-92a2-fb97b4db14fa", 00:15:32.450 "is_configured": true, 00:15:32.450 "data_offset": 0, 00:15:32.450 "data_size": 65536 00:15:32.450 }, 00:15:32.450 { 00:15:32.450 "name": "BaseBdev4", 00:15:32.450 "uuid": "7089338f-1d19-56a0-bd27-cc7a8cc3c49e", 00:15:32.450 "is_configured": true, 00:15:32.450 "data_offset": 0, 00:15:32.450 "data_size": 65536 00:15:32.450 } 00:15:32.450 ] 00:15:32.450 }' 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.450 [2024-11-20 14:26:11.338450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:32.450 [2024-11-20 14:26:11.352305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.450 14:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:32.450 [2024-11-20 14:26:11.355054] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:33.388 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.388 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.388 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.388 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.388 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.388 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.388 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.388 14:26:12 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.388 14:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.647 "name": "raid_bdev1", 00:15:33.647 "uuid": "e58e2f76-b15a-49b6-b180-362e1024d091", 00:15:33.647 "strip_size_kb": 0, 00:15:33.647 "state": "online", 00:15:33.647 "raid_level": "raid1", 00:15:33.647 "superblock": false, 00:15:33.647 "num_base_bdevs": 4, 00:15:33.647 "num_base_bdevs_discovered": 4, 00:15:33.647 "num_base_bdevs_operational": 4, 00:15:33.647 "process": { 00:15:33.647 "type": "rebuild", 00:15:33.647 "target": "spare", 00:15:33.647 "progress": { 00:15:33.647 "blocks": 20480, 00:15:33.647 "percent": 31 00:15:33.647 } 00:15:33.647 }, 00:15:33.647 "base_bdevs_list": [ 00:15:33.647 { 00:15:33.647 "name": "spare", 00:15:33.647 "uuid": "7c630a74-c4d6-513e-bd55-b6c986d5205c", 00:15:33.647 "is_configured": true, 00:15:33.647 "data_offset": 0, 00:15:33.647 "data_size": 65536 00:15:33.647 }, 00:15:33.647 { 00:15:33.647 "name": "BaseBdev2", 00:15:33.647 "uuid": "b55f7b2c-4207-5e32-8b32-f2c1498c30a0", 00:15:33.647 "is_configured": true, 00:15:33.647 "data_offset": 0, 00:15:33.647 "data_size": 65536 00:15:33.647 }, 00:15:33.647 { 00:15:33.647 "name": "BaseBdev3", 00:15:33.647 "uuid": "726f8497-b506-5df5-92a2-fb97b4db14fa", 00:15:33.647 "is_configured": true, 00:15:33.647 "data_offset": 0, 00:15:33.647 "data_size": 65536 00:15:33.647 }, 00:15:33.647 { 00:15:33.647 "name": "BaseBdev4", 00:15:33.647 "uuid": "7089338f-1d19-56a0-bd27-cc7a8cc3c49e", 00:15:33.647 "is_configured": true, 00:15:33.647 "data_offset": 0, 00:15:33.647 "data_size": 65536 00:15:33.647 } 00:15:33.647 ] 00:15:33.647 }' 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.647 [2024-11-20 14:26:12.520228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:33.647 [2024-11-20 14:26:12.564056] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.647 
14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.647 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.648 14:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.648 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.648 "name": "raid_bdev1", 00:15:33.648 "uuid": "e58e2f76-b15a-49b6-b180-362e1024d091", 00:15:33.648 "strip_size_kb": 0, 00:15:33.648 "state": "online", 00:15:33.648 "raid_level": "raid1", 00:15:33.648 "superblock": false, 00:15:33.648 "num_base_bdevs": 4, 00:15:33.648 "num_base_bdevs_discovered": 3, 00:15:33.648 "num_base_bdevs_operational": 3, 00:15:33.648 "process": { 00:15:33.648 "type": "rebuild", 00:15:33.648 "target": "spare", 00:15:33.648 "progress": { 00:15:33.648 "blocks": 24576, 00:15:33.648 "percent": 37 00:15:33.648 } 00:15:33.648 }, 00:15:33.648 "base_bdevs_list": [ 00:15:33.648 { 00:15:33.648 "name": "spare", 00:15:33.648 "uuid": "7c630a74-c4d6-513e-bd55-b6c986d5205c", 00:15:33.648 "is_configured": true, 00:15:33.648 "data_offset": 0, 00:15:33.648 "data_size": 65536 00:15:33.648 }, 00:15:33.648 { 00:15:33.648 "name": null, 00:15:33.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.648 "is_configured": false, 00:15:33.648 "data_offset": 0, 00:15:33.648 "data_size": 65536 00:15:33.648 }, 00:15:33.648 { 00:15:33.648 "name": "BaseBdev3", 00:15:33.648 "uuid": "726f8497-b506-5df5-92a2-fb97b4db14fa", 00:15:33.648 "is_configured": true, 
00:15:33.648 "data_offset": 0, 00:15:33.648 "data_size": 65536 00:15:33.648 }, 00:15:33.648 { 00:15:33.648 "name": "BaseBdev4", 00:15:33.648 "uuid": "7089338f-1d19-56a0-bd27-cc7a8cc3c49e", 00:15:33.648 "is_configured": true, 00:15:33.648 "data_offset": 0, 00:15:33.648 "data_size": 65536 00:15:33.648 } 00:15:33.648 ] 00:15:33.648 }' 00:15:33.648 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=479 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.907 14:26:12 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.907 "name": "raid_bdev1", 00:15:33.907 "uuid": "e58e2f76-b15a-49b6-b180-362e1024d091", 00:15:33.907 "strip_size_kb": 0, 00:15:33.907 "state": "online", 00:15:33.907 "raid_level": "raid1", 00:15:33.907 "superblock": false, 00:15:33.907 "num_base_bdevs": 4, 00:15:33.907 "num_base_bdevs_discovered": 3, 00:15:33.907 "num_base_bdevs_operational": 3, 00:15:33.907 "process": { 00:15:33.907 "type": "rebuild", 00:15:33.907 "target": "spare", 00:15:33.907 "progress": { 00:15:33.907 "blocks": 26624, 00:15:33.907 "percent": 40 00:15:33.907 } 00:15:33.907 }, 00:15:33.907 "base_bdevs_list": [ 00:15:33.907 { 00:15:33.907 "name": "spare", 00:15:33.907 "uuid": "7c630a74-c4d6-513e-bd55-b6c986d5205c", 00:15:33.907 "is_configured": true, 00:15:33.907 "data_offset": 0, 00:15:33.907 "data_size": 65536 00:15:33.907 }, 00:15:33.907 { 00:15:33.907 "name": null, 00:15:33.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.907 "is_configured": false, 00:15:33.907 "data_offset": 0, 00:15:33.907 "data_size": 65536 00:15:33.907 }, 00:15:33.907 { 00:15:33.907 "name": "BaseBdev3", 00:15:33.907 "uuid": "726f8497-b506-5df5-92a2-fb97b4db14fa", 00:15:33.907 "is_configured": true, 00:15:33.907 "data_offset": 0, 00:15:33.907 "data_size": 65536 00:15:33.907 }, 00:15:33.907 { 00:15:33.907 "name": "BaseBdev4", 00:15:33.907 "uuid": "7089338f-1d19-56a0-bd27-cc7a8cc3c49e", 00:15:33.907 "is_configured": true, 00:15:33.907 "data_offset": 0, 00:15:33.907 "data_size": 65536 00:15:33.907 } 00:15:33.907 ] 00:15:33.907 }' 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.907 14:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:34.910 14:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:34.910 14:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.910 14:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.911 14:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.911 14:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.911 14:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.911 14:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.911 14:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.911 14:26:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.911 14:26:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.169 14:26:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.169 14:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.169 "name": "raid_bdev1", 00:15:35.169 "uuid": "e58e2f76-b15a-49b6-b180-362e1024d091", 00:15:35.169 "strip_size_kb": 0, 00:15:35.169 "state": "online", 00:15:35.169 "raid_level": "raid1", 00:15:35.169 "superblock": false, 00:15:35.169 "num_base_bdevs": 4, 00:15:35.169 "num_base_bdevs_discovered": 3, 00:15:35.169 "num_base_bdevs_operational": 3, 00:15:35.169 "process": { 00:15:35.169 "type": "rebuild", 00:15:35.169 "target": "spare", 00:15:35.169 "progress": { 00:15:35.169 
"blocks": 51200, 00:15:35.169 "percent": 78 00:15:35.169 } 00:15:35.169 }, 00:15:35.169 "base_bdevs_list": [ 00:15:35.169 { 00:15:35.169 "name": "spare", 00:15:35.169 "uuid": "7c630a74-c4d6-513e-bd55-b6c986d5205c", 00:15:35.169 "is_configured": true, 00:15:35.169 "data_offset": 0, 00:15:35.169 "data_size": 65536 00:15:35.169 }, 00:15:35.169 { 00:15:35.169 "name": null, 00:15:35.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.169 "is_configured": false, 00:15:35.169 "data_offset": 0, 00:15:35.169 "data_size": 65536 00:15:35.169 }, 00:15:35.169 { 00:15:35.169 "name": "BaseBdev3", 00:15:35.169 "uuid": "726f8497-b506-5df5-92a2-fb97b4db14fa", 00:15:35.169 "is_configured": true, 00:15:35.169 "data_offset": 0, 00:15:35.169 "data_size": 65536 00:15:35.169 }, 00:15:35.169 { 00:15:35.169 "name": "BaseBdev4", 00:15:35.169 "uuid": "7089338f-1d19-56a0-bd27-cc7a8cc3c49e", 00:15:35.169 "is_configured": true, 00:15:35.169 "data_offset": 0, 00:15:35.169 "data_size": 65536 00:15:35.169 } 00:15:35.169 ] 00:15:35.169 }' 00:15:35.169 14:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.169 14:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.169 14:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.169 14:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.169 14:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:35.736 [2024-11-20 14:26:14.579063] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:35.736 [2024-11-20 14:26:14.579160] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:35.736 [2024-11-20 14:26:14.579227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.305 "name": "raid_bdev1", 00:15:36.305 "uuid": "e58e2f76-b15a-49b6-b180-362e1024d091", 00:15:36.305 "strip_size_kb": 0, 00:15:36.305 "state": "online", 00:15:36.305 "raid_level": "raid1", 00:15:36.305 "superblock": false, 00:15:36.305 "num_base_bdevs": 4, 00:15:36.305 "num_base_bdevs_discovered": 3, 00:15:36.305 "num_base_bdevs_operational": 3, 00:15:36.305 "base_bdevs_list": [ 00:15:36.305 { 00:15:36.305 "name": "spare", 00:15:36.305 "uuid": "7c630a74-c4d6-513e-bd55-b6c986d5205c", 00:15:36.305 "is_configured": true, 00:15:36.305 "data_offset": 0, 00:15:36.305 "data_size": 65536 00:15:36.305 }, 00:15:36.305 { 00:15:36.305 "name": null, 00:15:36.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.305 "is_configured": false, 00:15:36.305 
"data_offset": 0, 00:15:36.305 "data_size": 65536 00:15:36.305 }, 00:15:36.305 { 00:15:36.305 "name": "BaseBdev3", 00:15:36.305 "uuid": "726f8497-b506-5df5-92a2-fb97b4db14fa", 00:15:36.305 "is_configured": true, 00:15:36.305 "data_offset": 0, 00:15:36.305 "data_size": 65536 00:15:36.305 }, 00:15:36.305 { 00:15:36.305 "name": "BaseBdev4", 00:15:36.305 "uuid": "7089338f-1d19-56a0-bd27-cc7a8cc3c49e", 00:15:36.305 "is_configured": true, 00:15:36.305 "data_offset": 0, 00:15:36.305 "data_size": 65536 00:15:36.305 } 00:15:36.305 ] 00:15:36.305 }' 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.305 14:26:15 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.305 "name": "raid_bdev1", 00:15:36.305 "uuid": "e58e2f76-b15a-49b6-b180-362e1024d091", 00:15:36.305 "strip_size_kb": 0, 00:15:36.305 "state": "online", 00:15:36.305 "raid_level": "raid1", 00:15:36.305 "superblock": false, 00:15:36.305 "num_base_bdevs": 4, 00:15:36.305 "num_base_bdevs_discovered": 3, 00:15:36.305 "num_base_bdevs_operational": 3, 00:15:36.305 "base_bdevs_list": [ 00:15:36.305 { 00:15:36.305 "name": "spare", 00:15:36.305 "uuid": "7c630a74-c4d6-513e-bd55-b6c986d5205c", 00:15:36.305 "is_configured": true, 00:15:36.305 "data_offset": 0, 00:15:36.305 "data_size": 65536 00:15:36.305 }, 00:15:36.305 { 00:15:36.305 "name": null, 00:15:36.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.305 "is_configured": false, 00:15:36.305 "data_offset": 0, 00:15:36.305 "data_size": 65536 00:15:36.305 }, 00:15:36.305 { 00:15:36.305 "name": "BaseBdev3", 00:15:36.305 "uuid": "726f8497-b506-5df5-92a2-fb97b4db14fa", 00:15:36.305 "is_configured": true, 00:15:36.305 "data_offset": 0, 00:15:36.305 "data_size": 65536 00:15:36.305 }, 00:15:36.305 { 00:15:36.305 "name": "BaseBdev4", 00:15:36.305 "uuid": "7089338f-1d19-56a0-bd27-cc7a8cc3c49e", 00:15:36.305 "is_configured": true, 00:15:36.305 "data_offset": 0, 00:15:36.305 "data_size": 65536 00:15:36.305 } 00:15:36.305 ] 00:15:36.305 }' 00:15:36.305 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.564 "name": "raid_bdev1", 00:15:36.564 "uuid": "e58e2f76-b15a-49b6-b180-362e1024d091", 00:15:36.564 "strip_size_kb": 0, 00:15:36.564 "state": "online", 00:15:36.564 "raid_level": "raid1", 00:15:36.564 "superblock": false, 00:15:36.564 "num_base_bdevs": 4, 00:15:36.564 
"num_base_bdevs_discovered": 3, 00:15:36.564 "num_base_bdevs_operational": 3, 00:15:36.564 "base_bdevs_list": [ 00:15:36.564 { 00:15:36.564 "name": "spare", 00:15:36.564 "uuid": "7c630a74-c4d6-513e-bd55-b6c986d5205c", 00:15:36.564 "is_configured": true, 00:15:36.564 "data_offset": 0, 00:15:36.564 "data_size": 65536 00:15:36.564 }, 00:15:36.564 { 00:15:36.564 "name": null, 00:15:36.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.564 "is_configured": false, 00:15:36.564 "data_offset": 0, 00:15:36.564 "data_size": 65536 00:15:36.564 }, 00:15:36.564 { 00:15:36.564 "name": "BaseBdev3", 00:15:36.564 "uuid": "726f8497-b506-5df5-92a2-fb97b4db14fa", 00:15:36.564 "is_configured": true, 00:15:36.564 "data_offset": 0, 00:15:36.564 "data_size": 65536 00:15:36.564 }, 00:15:36.564 { 00:15:36.564 "name": "BaseBdev4", 00:15:36.564 "uuid": "7089338f-1d19-56a0-bd27-cc7a8cc3c49e", 00:15:36.564 "is_configured": true, 00:15:36.564 "data_offset": 0, 00:15:36.564 "data_size": 65536 00:15:36.564 } 00:15:36.564 ] 00:15:36.564 }' 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.564 14:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.132 [2024-11-20 14:26:15.875623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.132 [2024-11-20 14:26:15.875681] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.132 [2024-11-20 14:26:15.875786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.132 [2024-11-20 14:26:15.875892] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:15:37.132 [2024-11-20 14:26:15.875908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:37.132 14:26:15 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:37.132 14:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:37.391 /dev/nbd0 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.391 1+0 records in 00:15:37.391 1+0 records out 00:15:37.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328584 s, 12.5 MB/s 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:37.391 14:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:37.651 /dev/nbd1 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.651 1+0 records in 00:15:37.651 1+0 records out 00:15:37.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401219 s, 10.2 MB/s 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:37.651 14:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:37.910 14:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:37.910 14:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:37.910 14:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:37.910 14:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:37.910 14:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:37.910 14:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:37.910 14:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:38.169 14:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:38.169 14:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:38.169 14:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:38.169 14:26:17 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.169 14:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.169 14:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:38.169 14:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:38.169 14:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.169 14:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.169 14:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:38.433 14:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:38.433 14:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:38.433 14:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:38.433 14:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.433 14:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.433 14:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:38.705 14:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:38.705 14:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.705 14:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:38.705 14:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77816 00:15:38.705 14:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77816 ']' 00:15:38.705 14:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77816 00:15:38.705 14:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # 
uname 00:15:38.705 14:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.705 14:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77816 00:15:38.705 killing process with pid 77816 00:15:38.705 Received shutdown signal, test time was about 60.000000 seconds 00:15:38.705 00:15:38.705 Latency(us) 00:15:38.705 [2024-11-20T14:26:17.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.705 [2024-11-20T14:26:17.687Z] =================================================================================================================== 00:15:38.705 [2024-11-20T14:26:17.687Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:38.705 14:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.705 14:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.705 14:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77816' 00:15:38.705 14:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77816 00:15:38.705 14:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77816 00:15:38.705 [2024-11-20 14:26:17.441802] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:38.965 [2024-11-20 14:26:17.894964] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:40.353 14:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:40.353 00:15:40.353 real 0m21.039s 00:15:40.353 user 0m23.432s 00:15:40.353 sys 0m3.548s 00:15:40.353 14:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.353 14:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.353 ************************************ 00:15:40.353 END TEST raid_rebuild_test 
00:15:40.353 ************************************ 00:15:40.353 14:26:19 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:15:40.353 14:26:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:40.353 14:26:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.353 14:26:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:40.353 ************************************ 00:15:40.353 START TEST raid_rebuild_test_sb 00:15:40.354 ************************************ 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:40.354 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78302 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78302 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78302 ']' 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.355 14:26:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.355 [2024-11-20 14:26:19.155439] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:15:40.355 [2024-11-20 14:26:19.155856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:40.355 Zero copy mechanism will not be used. 
00:15:40.356 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78302 ] 00:15:40.620 [2024-11-20 14:26:19.344879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.620 [2024-11-20 14:26:19.525580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.878 [2024-11-20 14:26:19.742496] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:40.878 [2024-11-20 14:26:19.742569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.445 BaseBdev1_malloc 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.445 [2024-11-20 14:26:20.241569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:41.445 [2024-11-20 14:26:20.241801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:41.445 [2024-11-20 14:26:20.241878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:41.445 [2024-11-20 14:26:20.242021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.445 [2024-11-20 14:26:20.244847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.445 [2024-11-20 14:26:20.245074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:41.445 BaseBdev1 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.445 BaseBdev2_malloc 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.445 [2024-11-20 14:26:20.296326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:41.445 [2024-11-20 14:26:20.296540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.445 [2024-11-20 14:26:20.296582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:41.445 [2024-11-20 14:26:20.296602] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.445 [2024-11-20 14:26:20.299422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.445 BaseBdev2 00:15:41.445 [2024-11-20 14:26:20.299603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.445 BaseBdev3_malloc 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.445 [2024-11-20 14:26:20.363038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:41.445 [2024-11-20 14:26:20.363265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.445 [2024-11-20 14:26:20.363354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:41.445 [2024-11-20 14:26:20.363493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.445 [2024-11-20 14:26:20.366543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:15:41.445 [2024-11-20 14:26:20.366736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:41.445 BaseBdev3 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.445 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:41.446 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.446 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.446 BaseBdev4_malloc 00:15:41.446 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.446 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:41.446 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.446 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.446 [2024-11-20 14:26:20.419109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:41.446 [2024-11-20 14:26:20.419317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.446 [2024-11-20 14:26:20.419390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:41.446 [2024-11-20 14:26:20.419499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.446 [2024-11-20 14:26:20.422284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.446 [2024-11-20 14:26:20.422491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:41.446 BaseBdev4 00:15:41.446 14:26:20 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.446 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:41.446 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.446 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.705 spare_malloc 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.705 spare_delay 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.705 [2024-11-20 14:26:20.483437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:41.705 [2024-11-20 14:26:20.483644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.705 [2024-11-20 14:26:20.483716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:41.705 [2024-11-20 14:26:20.483823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.705 [2024-11-20 14:26:20.486685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:15:41.705 [2024-11-20 14:26:20.486850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:41.705 spare 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.705 [2024-11-20 14:26:20.491603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.705 [2024-11-20 14:26:20.494186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.705 [2024-11-20 14:26:20.494390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:41.705 [2024-11-20 14:26:20.494596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:41.705 [2024-11-20 14:26:20.495009] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:41.705 [2024-11-20 14:26:20.495041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:41.705 [2024-11-20 14:26:20.495395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:41.705 [2024-11-20 14:26:20.495624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:41.705 [2024-11-20 14:26:20.495641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:41.705 [2024-11-20 14:26:20.495892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.705 "name": "raid_bdev1", 00:15:41.705 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:15:41.705 "strip_size_kb": 0, 00:15:41.705 "state": "online", 00:15:41.705 "raid_level": "raid1", 
00:15:41.705 "superblock": true, 00:15:41.705 "num_base_bdevs": 4, 00:15:41.705 "num_base_bdevs_discovered": 4, 00:15:41.705 "num_base_bdevs_operational": 4, 00:15:41.705 "base_bdevs_list": [ 00:15:41.705 { 00:15:41.705 "name": "BaseBdev1", 00:15:41.705 "uuid": "353efd1f-8371-54c1-817d-6ee6bd040af2", 00:15:41.705 "is_configured": true, 00:15:41.705 "data_offset": 2048, 00:15:41.705 "data_size": 63488 00:15:41.705 }, 00:15:41.705 { 00:15:41.705 "name": "BaseBdev2", 00:15:41.705 "uuid": "2c3da830-6a67-561f-9448-d59333726311", 00:15:41.705 "is_configured": true, 00:15:41.705 "data_offset": 2048, 00:15:41.705 "data_size": 63488 00:15:41.705 }, 00:15:41.705 { 00:15:41.705 "name": "BaseBdev3", 00:15:41.705 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:15:41.705 "is_configured": true, 00:15:41.705 "data_offset": 2048, 00:15:41.705 "data_size": 63488 00:15:41.705 }, 00:15:41.705 { 00:15:41.705 "name": "BaseBdev4", 00:15:41.705 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:15:41.705 "is_configured": true, 00:15:41.705 "data_offset": 2048, 00:15:41.705 "data_size": 63488 00:15:41.705 } 00:15:41.705 ] 00:15:41.705 }' 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.705 14:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.272 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:42.272 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.273 [2024-11-20 14:26:21.032498] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.273 
14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:15:42.273 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:42.532 [2024-11-20 14:26:21.452881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:42.532 /dev/nbd0 00:15:42.532 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:42.532 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:42.532 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:42.532 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:42.532 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:42.532 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:42.532 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:42.532 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:42.532 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:42.532 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:42.532 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.532 1+0 records in 00:15:42.532 1+0 records out 00:15:42.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071109 s, 5.8 MB/s 00:15:42.532 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.790 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:42.790 14:26:21 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.790 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:42.790 14:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:42.790 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.790 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:42.790 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:42.790 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:42.790 14:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:50.916 63488+0 records in 00:15:50.916 63488+0 records out 00:15:50.916 32505856 bytes (33 MB, 31 MiB) copied, 8.12753 s, 4.0 MB/s 00:15:50.916 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:50.916 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:50.916 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:50.916 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:50.916 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:50.916 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.916 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:51.175 [2024-11-20 14:26:29.954930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.175 [2024-11-20 14:26:29.991047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.175 14:26:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.175 14:26:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.175 14:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.175 "name": "raid_bdev1", 00:15:51.175 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:15:51.175 "strip_size_kb": 0, 00:15:51.175 "state": "online", 00:15:51.175 "raid_level": "raid1", 00:15:51.175 "superblock": true, 00:15:51.175 "num_base_bdevs": 4, 00:15:51.175 "num_base_bdevs_discovered": 3, 00:15:51.175 "num_base_bdevs_operational": 3, 00:15:51.175 "base_bdevs_list": [ 00:15:51.175 { 00:15:51.175 "name": null, 00:15:51.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.175 "is_configured": false, 00:15:51.175 "data_offset": 0, 00:15:51.175 "data_size": 63488 00:15:51.175 }, 00:15:51.175 { 00:15:51.175 "name": "BaseBdev2", 00:15:51.175 "uuid": "2c3da830-6a67-561f-9448-d59333726311", 00:15:51.175 "is_configured": true, 00:15:51.175 "data_offset": 2048, 00:15:51.175 "data_size": 63488 00:15:51.175 }, 00:15:51.175 { 00:15:51.175 "name": "BaseBdev3", 00:15:51.175 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:15:51.175 "is_configured": true, 
00:15:51.175 "data_offset": 2048, 00:15:51.175 "data_size": 63488 00:15:51.175 }, 00:15:51.175 { 00:15:51.175 "name": "BaseBdev4", 00:15:51.175 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:15:51.175 "is_configured": true, 00:15:51.175 "data_offset": 2048, 00:15:51.175 "data_size": 63488 00:15:51.175 } 00:15:51.175 ] 00:15:51.175 }' 00:15:51.175 14:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.175 14:26:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.744 14:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:51.744 14:26:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.744 14:26:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.744 [2024-11-20 14:26:30.519221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:51.744 [2024-11-20 14:26:30.533870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:15:51.744 14:26:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.744 14:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:51.744 [2024-11-20 14:26:30.536503] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:52.681 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.681 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.681 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.681 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.681 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:15:52.681 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.681 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.681 14:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.681 14:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.681 14:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.681 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.681 "name": "raid_bdev1", 00:15:52.681 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:15:52.681 "strip_size_kb": 0, 00:15:52.681 "state": "online", 00:15:52.681 "raid_level": "raid1", 00:15:52.681 "superblock": true, 00:15:52.681 "num_base_bdevs": 4, 00:15:52.681 "num_base_bdevs_discovered": 4, 00:15:52.681 "num_base_bdevs_operational": 4, 00:15:52.681 "process": { 00:15:52.681 "type": "rebuild", 00:15:52.681 "target": "spare", 00:15:52.681 "progress": { 00:15:52.681 "blocks": 20480, 00:15:52.681 "percent": 32 00:15:52.681 } 00:15:52.681 }, 00:15:52.681 "base_bdevs_list": [ 00:15:52.681 { 00:15:52.681 "name": "spare", 00:15:52.681 "uuid": "89ea3881-b224-5605-90b9-d234d4be48e2", 00:15:52.681 "is_configured": true, 00:15:52.681 "data_offset": 2048, 00:15:52.681 "data_size": 63488 00:15:52.681 }, 00:15:52.681 { 00:15:52.681 "name": "BaseBdev2", 00:15:52.681 "uuid": "2c3da830-6a67-561f-9448-d59333726311", 00:15:52.681 "is_configured": true, 00:15:52.681 "data_offset": 2048, 00:15:52.681 "data_size": 63488 00:15:52.681 }, 00:15:52.681 { 00:15:52.681 "name": "BaseBdev3", 00:15:52.681 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:15:52.681 "is_configured": true, 00:15:52.681 "data_offset": 2048, 00:15:52.681 "data_size": 63488 00:15:52.681 }, 00:15:52.681 { 00:15:52.681 "name": "BaseBdev4", 
00:15:52.681 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:15:52.681 "is_configured": true, 00:15:52.681 "data_offset": 2048, 00:15:52.681 "data_size": 63488 00:15:52.681 } 00:15:52.681 ] 00:15:52.681 }' 00:15:52.681 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.681 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.681 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.940 [2024-11-20 14:26:31.706143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.940 [2024-11-20 14:26:31.746162] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:52.940 [2024-11-20 14:26:31.746283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.940 [2024-11-20 14:26:31.746310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.940 [2024-11-20 14:26:31.746325] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.940 14:26:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.940 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.940 "name": "raid_bdev1", 00:15:52.940 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:15:52.940 "strip_size_kb": 0, 00:15:52.940 "state": "online", 00:15:52.940 "raid_level": "raid1", 00:15:52.940 "superblock": true, 00:15:52.940 "num_base_bdevs": 4, 00:15:52.940 "num_base_bdevs_discovered": 3, 00:15:52.941 "num_base_bdevs_operational": 3, 00:15:52.941 "base_bdevs_list": [ 00:15:52.941 { 00:15:52.941 "name": null, 00:15:52.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.941 
"is_configured": false, 00:15:52.941 "data_offset": 0, 00:15:52.941 "data_size": 63488 00:15:52.941 }, 00:15:52.941 { 00:15:52.941 "name": "BaseBdev2", 00:15:52.941 "uuid": "2c3da830-6a67-561f-9448-d59333726311", 00:15:52.941 "is_configured": true, 00:15:52.941 "data_offset": 2048, 00:15:52.941 "data_size": 63488 00:15:52.941 }, 00:15:52.941 { 00:15:52.941 "name": "BaseBdev3", 00:15:52.941 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:15:52.941 "is_configured": true, 00:15:52.941 "data_offset": 2048, 00:15:52.941 "data_size": 63488 00:15:52.941 }, 00:15:52.941 { 00:15:52.941 "name": "BaseBdev4", 00:15:52.941 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:15:52.941 "is_configured": true, 00:15:52.941 "data_offset": 2048, 00:15:52.941 "data_size": 63488 00:15:52.941 } 00:15:52.941 ] 00:15:52.941 }' 00:15:52.941 14:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.941 14:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.508 14:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:53.508 14:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.508 14:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:53.508 14:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:53.508 14:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.508 14:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.508 14:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.508 14:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.508 14:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:53.508 14:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.508 14:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.508 "name": "raid_bdev1", 00:15:53.508 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:15:53.508 "strip_size_kb": 0, 00:15:53.508 "state": "online", 00:15:53.508 "raid_level": "raid1", 00:15:53.508 "superblock": true, 00:15:53.508 "num_base_bdevs": 4, 00:15:53.508 "num_base_bdevs_discovered": 3, 00:15:53.508 "num_base_bdevs_operational": 3, 00:15:53.508 "base_bdevs_list": [ 00:15:53.508 { 00:15:53.508 "name": null, 00:15:53.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.508 "is_configured": false, 00:15:53.508 "data_offset": 0, 00:15:53.508 "data_size": 63488 00:15:53.508 }, 00:15:53.508 { 00:15:53.508 "name": "BaseBdev2", 00:15:53.508 "uuid": "2c3da830-6a67-561f-9448-d59333726311", 00:15:53.508 "is_configured": true, 00:15:53.508 "data_offset": 2048, 00:15:53.508 "data_size": 63488 00:15:53.508 }, 00:15:53.508 { 00:15:53.508 "name": "BaseBdev3", 00:15:53.509 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:15:53.509 "is_configured": true, 00:15:53.509 "data_offset": 2048, 00:15:53.509 "data_size": 63488 00:15:53.509 }, 00:15:53.509 { 00:15:53.509 "name": "BaseBdev4", 00:15:53.509 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:15:53.509 "is_configured": true, 00:15:53.509 "data_offset": 2048, 00:15:53.509 "data_size": 63488 00:15:53.509 } 00:15:53.509 ] 00:15:53.509 }' 00:15:53.509 14:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.509 14:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:53.509 14:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.509 14:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:53.509 
14:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:53.509 14:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.509 14:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.509 [2024-11-20 14:26:32.471557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:53.509 [2024-11-20 14:26:32.485069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:15:53.509 14:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.509 14:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:53.509 [2024-11-20 14:26:32.487685] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.885 "name": "raid_bdev1", 00:15:54.885 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:15:54.885 "strip_size_kb": 0, 00:15:54.885 "state": "online", 00:15:54.885 "raid_level": "raid1", 00:15:54.885 "superblock": true, 00:15:54.885 "num_base_bdevs": 4, 00:15:54.885 "num_base_bdevs_discovered": 4, 00:15:54.885 "num_base_bdevs_operational": 4, 00:15:54.885 "process": { 00:15:54.885 "type": "rebuild", 00:15:54.885 "target": "spare", 00:15:54.885 "progress": { 00:15:54.885 "blocks": 20480, 00:15:54.885 "percent": 32 00:15:54.885 } 00:15:54.885 }, 00:15:54.885 "base_bdevs_list": [ 00:15:54.885 { 00:15:54.885 "name": "spare", 00:15:54.885 "uuid": "89ea3881-b224-5605-90b9-d234d4be48e2", 00:15:54.885 "is_configured": true, 00:15:54.885 "data_offset": 2048, 00:15:54.885 "data_size": 63488 00:15:54.885 }, 00:15:54.885 { 00:15:54.885 "name": "BaseBdev2", 00:15:54.885 "uuid": "2c3da830-6a67-561f-9448-d59333726311", 00:15:54.885 "is_configured": true, 00:15:54.885 "data_offset": 2048, 00:15:54.885 "data_size": 63488 00:15:54.885 }, 00:15:54.885 { 00:15:54.885 "name": "BaseBdev3", 00:15:54.885 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:15:54.885 "is_configured": true, 00:15:54.885 "data_offset": 2048, 00:15:54.885 "data_size": 63488 00:15:54.885 }, 00:15:54.885 { 00:15:54.885 "name": "BaseBdev4", 00:15:54.885 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:15:54.885 "is_configured": true, 00:15:54.885 "data_offset": 2048, 00:15:54.885 "data_size": 63488 00:15:54.885 } 00:15:54.885 ] 00:15:54.885 }' 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:54.885 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.885 [2024-11-20 14:26:33.661131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:54.885 [2024-11-20 14:26:33.797282] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.885 14:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.144 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.144 "name": "raid_bdev1", 00:15:55.144 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:15:55.144 "strip_size_kb": 0, 00:15:55.144 "state": "online", 00:15:55.144 "raid_level": "raid1", 00:15:55.144 "superblock": true, 00:15:55.144 "num_base_bdevs": 4, 00:15:55.144 "num_base_bdevs_discovered": 3, 00:15:55.144 "num_base_bdevs_operational": 3, 00:15:55.144 "process": { 00:15:55.144 "type": "rebuild", 00:15:55.144 "target": "spare", 00:15:55.144 "progress": { 00:15:55.144 "blocks": 24576, 00:15:55.144 "percent": 38 00:15:55.144 } 00:15:55.144 }, 00:15:55.144 "base_bdevs_list": [ 00:15:55.144 { 00:15:55.144 "name": "spare", 00:15:55.144 "uuid": "89ea3881-b224-5605-90b9-d234d4be48e2", 00:15:55.144 "is_configured": true, 00:15:55.144 "data_offset": 2048, 00:15:55.144 "data_size": 63488 00:15:55.144 }, 00:15:55.144 { 00:15:55.144 "name": null, 00:15:55.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.144 "is_configured": false, 00:15:55.144 "data_offset": 0, 00:15:55.144 "data_size": 63488 00:15:55.144 }, 00:15:55.144 { 00:15:55.144 "name": "BaseBdev3", 00:15:55.144 "uuid": 
"4993494f-c0ba-532a-852f-b0954456be49", 00:15:55.144 "is_configured": true, 00:15:55.144 "data_offset": 2048, 00:15:55.144 "data_size": 63488 00:15:55.144 }, 00:15:55.144 { 00:15:55.144 "name": "BaseBdev4", 00:15:55.144 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:15:55.144 "is_configured": true, 00:15:55.144 "data_offset": 2048, 00:15:55.144 "data_size": 63488 00:15:55.144 } 00:15:55.144 ] 00:15:55.144 }' 00:15:55.144 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.144 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.144 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.144 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.144 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=500 00:15:55.144 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:55.144 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.144 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.144 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.144 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.144 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.144 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.144 14:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.144 14:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:55.144 14:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.144 14:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.144 14:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.144 "name": "raid_bdev1", 00:15:55.144 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:15:55.144 "strip_size_kb": 0, 00:15:55.144 "state": "online", 00:15:55.144 "raid_level": "raid1", 00:15:55.144 "superblock": true, 00:15:55.144 "num_base_bdevs": 4, 00:15:55.144 "num_base_bdevs_discovered": 3, 00:15:55.144 "num_base_bdevs_operational": 3, 00:15:55.144 "process": { 00:15:55.144 "type": "rebuild", 00:15:55.144 "target": "spare", 00:15:55.144 "progress": { 00:15:55.144 "blocks": 26624, 00:15:55.144 "percent": 41 00:15:55.144 } 00:15:55.144 }, 00:15:55.144 "base_bdevs_list": [ 00:15:55.144 { 00:15:55.144 "name": "spare", 00:15:55.144 "uuid": "89ea3881-b224-5605-90b9-d234d4be48e2", 00:15:55.144 "is_configured": true, 00:15:55.144 "data_offset": 2048, 00:15:55.144 "data_size": 63488 00:15:55.144 }, 00:15:55.144 { 00:15:55.144 "name": null, 00:15:55.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.144 "is_configured": false, 00:15:55.144 "data_offset": 0, 00:15:55.144 "data_size": 63488 00:15:55.144 }, 00:15:55.144 { 00:15:55.144 "name": "BaseBdev3", 00:15:55.144 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:15:55.144 "is_configured": true, 00:15:55.144 "data_offset": 2048, 00:15:55.144 "data_size": 63488 00:15:55.144 }, 00:15:55.144 { 00:15:55.144 "name": "BaseBdev4", 00:15:55.144 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:15:55.144 "is_configured": true, 00:15:55.144 "data_offset": 2048, 00:15:55.144 "data_size": 63488 00:15:55.144 } 00:15:55.144 ] 00:15:55.144 }' 00:15:55.144 14:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.144 14:26:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.144 14:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.403 14:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.403 14:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:56.436 14:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:56.436 14:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.436 14:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.436 14:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.436 14:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.436 14:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.436 14:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.436 14:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.436 14:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.436 14:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.436 14:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.436 14:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.436 "name": "raid_bdev1", 00:15:56.436 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:15:56.436 "strip_size_kb": 0, 00:15:56.436 "state": "online", 00:15:56.436 "raid_level": "raid1", 00:15:56.436 "superblock": true, 00:15:56.436 "num_base_bdevs": 4, 00:15:56.436 
"num_base_bdevs_discovered": 3, 00:15:56.436 "num_base_bdevs_operational": 3, 00:15:56.436 "process": { 00:15:56.436 "type": "rebuild", 00:15:56.436 "target": "spare", 00:15:56.436 "progress": { 00:15:56.436 "blocks": 53248, 00:15:56.436 "percent": 83 00:15:56.436 } 00:15:56.436 }, 00:15:56.436 "base_bdevs_list": [ 00:15:56.436 { 00:15:56.436 "name": "spare", 00:15:56.436 "uuid": "89ea3881-b224-5605-90b9-d234d4be48e2", 00:15:56.436 "is_configured": true, 00:15:56.436 "data_offset": 2048, 00:15:56.436 "data_size": 63488 00:15:56.436 }, 00:15:56.436 { 00:15:56.436 "name": null, 00:15:56.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.436 "is_configured": false, 00:15:56.436 "data_offset": 0, 00:15:56.436 "data_size": 63488 00:15:56.436 }, 00:15:56.436 { 00:15:56.436 "name": "BaseBdev3", 00:15:56.436 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:15:56.436 "is_configured": true, 00:15:56.436 "data_offset": 2048, 00:15:56.436 "data_size": 63488 00:15:56.436 }, 00:15:56.436 { 00:15:56.436 "name": "BaseBdev4", 00:15:56.436 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:15:56.436 "is_configured": true, 00:15:56.436 "data_offset": 2048, 00:15:56.436 "data_size": 63488 00:15:56.436 } 00:15:56.436 ] 00:15:56.436 }' 00:15:56.436 14:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.436 14:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.436 14:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.436 14:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.436 14:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:57.002 [2024-11-20 14:26:35.711436] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:57.002 [2024-11-20 14:26:35.711524] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:57.002 [2024-11-20 14:26:35.711691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.569 "name": "raid_bdev1", 00:15:57.569 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:15:57.569 "strip_size_kb": 0, 00:15:57.569 "state": "online", 00:15:57.569 "raid_level": "raid1", 00:15:57.569 "superblock": true, 00:15:57.569 "num_base_bdevs": 4, 00:15:57.569 "num_base_bdevs_discovered": 3, 00:15:57.569 "num_base_bdevs_operational": 3, 00:15:57.569 "base_bdevs_list": [ 00:15:57.569 { 00:15:57.569 "name": "spare", 00:15:57.569 "uuid": 
"89ea3881-b224-5605-90b9-d234d4be48e2", 00:15:57.569 "is_configured": true, 00:15:57.569 "data_offset": 2048, 00:15:57.569 "data_size": 63488 00:15:57.569 }, 00:15:57.569 { 00:15:57.569 "name": null, 00:15:57.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.569 "is_configured": false, 00:15:57.569 "data_offset": 0, 00:15:57.569 "data_size": 63488 00:15:57.569 }, 00:15:57.569 { 00:15:57.569 "name": "BaseBdev3", 00:15:57.569 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:15:57.569 "is_configured": true, 00:15:57.569 "data_offset": 2048, 00:15:57.569 "data_size": 63488 00:15:57.569 }, 00:15:57.569 { 00:15:57.569 "name": "BaseBdev4", 00:15:57.569 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:15:57.569 "is_configured": true, 00:15:57.569 "data_offset": 2048, 00:15:57.569 "data_size": 63488 00:15:57.569 } 00:15:57.569 ] 00:15:57.569 }' 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.569 14:26:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.569 14:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.828 14:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.828 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.828 "name": "raid_bdev1", 00:15:57.828 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:15:57.828 "strip_size_kb": 0, 00:15:57.828 "state": "online", 00:15:57.828 "raid_level": "raid1", 00:15:57.828 "superblock": true, 00:15:57.828 "num_base_bdevs": 4, 00:15:57.828 "num_base_bdevs_discovered": 3, 00:15:57.828 "num_base_bdevs_operational": 3, 00:15:57.828 "base_bdevs_list": [ 00:15:57.828 { 00:15:57.828 "name": "spare", 00:15:57.828 "uuid": "89ea3881-b224-5605-90b9-d234d4be48e2", 00:15:57.828 "is_configured": true, 00:15:57.828 "data_offset": 2048, 00:15:57.828 "data_size": 63488 00:15:57.828 }, 00:15:57.828 { 00:15:57.828 "name": null, 00:15:57.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.828 "is_configured": false, 00:15:57.828 "data_offset": 0, 00:15:57.828 "data_size": 63488 00:15:57.828 }, 00:15:57.828 { 00:15:57.828 "name": "BaseBdev3", 00:15:57.828 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:15:57.828 "is_configured": true, 00:15:57.828 "data_offset": 2048, 00:15:57.828 "data_size": 63488 00:15:57.828 }, 00:15:57.828 { 00:15:57.828 "name": "BaseBdev4", 00:15:57.828 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:15:57.828 "is_configured": true, 00:15:57.828 "data_offset": 2048, 00:15:57.828 "data_size": 63488 00:15:57.828 } 00:15:57.828 ] 00:15:57.828 }' 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.829 "name": "raid_bdev1", 00:15:57.829 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:15:57.829 "strip_size_kb": 0, 00:15:57.829 "state": "online", 00:15:57.829 "raid_level": "raid1", 00:15:57.829 "superblock": true, 00:15:57.829 "num_base_bdevs": 4, 00:15:57.829 "num_base_bdevs_discovered": 3, 00:15:57.829 "num_base_bdevs_operational": 3, 00:15:57.829 "base_bdevs_list": [ 00:15:57.829 { 00:15:57.829 "name": "spare", 00:15:57.829 "uuid": "89ea3881-b224-5605-90b9-d234d4be48e2", 00:15:57.829 "is_configured": true, 00:15:57.829 "data_offset": 2048, 00:15:57.829 "data_size": 63488 00:15:57.829 }, 00:15:57.829 { 00:15:57.829 "name": null, 00:15:57.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.829 "is_configured": false, 00:15:57.829 "data_offset": 0, 00:15:57.829 "data_size": 63488 00:15:57.829 }, 00:15:57.829 { 00:15:57.829 "name": "BaseBdev3", 00:15:57.829 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:15:57.829 "is_configured": true, 00:15:57.829 "data_offset": 2048, 00:15:57.829 "data_size": 63488 00:15:57.829 }, 00:15:57.829 { 00:15:57.829 "name": "BaseBdev4", 00:15:57.829 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:15:57.829 "is_configured": true, 00:15:57.829 "data_offset": 2048, 00:15:57.829 "data_size": 63488 00:15:57.829 } 00:15:57.829 ] 00:15:57.829 }' 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.829 14:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.398 
[2024-11-20 14:26:37.215289] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.398 [2024-11-20 14:26:37.215332] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.398 [2024-11-20 14:26:37.215445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.398 [2024-11-20 14:26:37.215548] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.398 [2024-11-20 14:26:37.215564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:58.398 14:26:37 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:58.398 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:58.657 /dev/nbd0 00:15:58.657 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:58.657 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:58.657 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:58.657 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:58.657 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.657 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.657 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:58.657 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:58.657 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.657 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.657 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:15:58.657 1+0 records in 00:15:58.657 1+0 records out 00:15:58.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048053 s, 8.5 MB/s 00:15:58.657 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.657 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:58.657 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.657 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.657 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:58.657 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:58.658 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:58.658 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:59.225 /dev/nbd1 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:59.225 
14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.225 1+0 records in 00:15:59.225 1+0 records out 00:15:59.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387574 s, 10.6 MB/s 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:59.225 14:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:59.225 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:59.225 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:59.225 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:59.225 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:59.225 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 
00:15:59.225 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.225 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:59.485 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:59.485 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:59.485 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:59.485 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.485 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.485 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:59.485 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:59.485 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.485 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.485 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:00.051 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:00.051 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:00.051 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:00.051 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.051 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.051 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:00.051 
14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:00.051 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.051 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:00.051 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:00.051 14:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.051 14:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.051 14:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.051 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:00.051 14:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.052 [2024-11-20 14:26:38.820832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:00.052 [2024-11-20 14:26:38.821039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.052 [2024-11-20 14:26:38.821085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:00.052 [2024-11-20 14:26:38.821101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.052 [2024-11-20 14:26:38.823879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.052 [2024-11-20 14:26:38.823921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:00.052 [2024-11-20 14:26:38.824046] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:00.052 [2024-11-20 14:26:38.824110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
spare is claimed 00:16:00.052 [2024-11-20 14:26:38.824295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:00.052 [2024-11-20 14:26:38.824437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:00.052 spare 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.052 [2024-11-20 14:26:38.924546] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:00.052 [2024-11-20 14:26:38.924573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:00.052 [2024-11-20 14:26:38.924894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:00.052 [2024-11-20 14:26:38.925141] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:00.052 [2024-11-20 14:26:38.925179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:00.052 [2024-11-20 14:26:38.925420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.052 "name": "raid_bdev1", 00:16:00.052 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:16:00.052 "strip_size_kb": 0, 00:16:00.052 "state": "online", 00:16:00.052 "raid_level": "raid1", 00:16:00.052 "superblock": true, 00:16:00.052 "num_base_bdevs": 4, 00:16:00.052 "num_base_bdevs_discovered": 3, 00:16:00.052 "num_base_bdevs_operational": 3, 00:16:00.052 "base_bdevs_list": [ 00:16:00.052 { 00:16:00.052 "name": "spare", 00:16:00.052 "uuid": "89ea3881-b224-5605-90b9-d234d4be48e2", 00:16:00.052 "is_configured": true, 00:16:00.052 "data_offset": 2048, 00:16:00.052 "data_size": 63488 00:16:00.052 }, 00:16:00.052 { 00:16:00.052 "name": null, 
00:16:00.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.052 "is_configured": false, 00:16:00.052 "data_offset": 2048, 00:16:00.052 "data_size": 63488 00:16:00.052 }, 00:16:00.052 { 00:16:00.052 "name": "BaseBdev3", 00:16:00.052 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:16:00.052 "is_configured": true, 00:16:00.052 "data_offset": 2048, 00:16:00.052 "data_size": 63488 00:16:00.052 }, 00:16:00.052 { 00:16:00.052 "name": "BaseBdev4", 00:16:00.052 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:16:00.052 "is_configured": true, 00:16:00.052 "data_offset": 2048, 00:16:00.052 "data_size": 63488 00:16:00.052 } 00:16:00.052 ] 00:16:00.052 }' 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.052 14:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.619 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:00.619 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.619 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:00.619 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:00.619 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.619 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.619 14:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.619 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.619 14:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.619 14:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.619 14:26:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.619 "name": "raid_bdev1", 00:16:00.619 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:16:00.619 "strip_size_kb": 0, 00:16:00.619 "state": "online", 00:16:00.619 "raid_level": "raid1", 00:16:00.619 "superblock": true, 00:16:00.619 "num_base_bdevs": 4, 00:16:00.619 "num_base_bdevs_discovered": 3, 00:16:00.619 "num_base_bdevs_operational": 3, 00:16:00.619 "base_bdevs_list": [ 00:16:00.619 { 00:16:00.619 "name": "spare", 00:16:00.619 "uuid": "89ea3881-b224-5605-90b9-d234d4be48e2", 00:16:00.619 "is_configured": true, 00:16:00.619 "data_offset": 2048, 00:16:00.619 "data_size": 63488 00:16:00.619 }, 00:16:00.619 { 00:16:00.619 "name": null, 00:16:00.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.619 "is_configured": false, 00:16:00.619 "data_offset": 2048, 00:16:00.619 "data_size": 63488 00:16:00.619 }, 00:16:00.619 { 00:16:00.619 "name": "BaseBdev3", 00:16:00.619 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:16:00.619 "is_configured": true, 00:16:00.619 "data_offset": 2048, 00:16:00.619 "data_size": 63488 00:16:00.619 }, 00:16:00.619 { 00:16:00.619 "name": "BaseBdev4", 00:16:00.619 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:16:00.619 "is_configured": true, 00:16:00.619 "data_offset": 2048, 00:16:00.619 "data_size": 63488 00:16:00.619 } 00:16:00.619 ] 00:16:00.619 }' 00:16:00.619 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.619 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:00.619 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.619 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:00.619 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:00.619 14:26:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.619 14:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.620 14:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.878 [2024-11-20 14:26:39.645649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.878 "name": "raid_bdev1", 00:16:00.878 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:16:00.878 "strip_size_kb": 0, 00:16:00.878 "state": "online", 00:16:00.878 "raid_level": "raid1", 00:16:00.878 "superblock": true, 00:16:00.878 "num_base_bdevs": 4, 00:16:00.878 "num_base_bdevs_discovered": 2, 00:16:00.878 "num_base_bdevs_operational": 2, 00:16:00.878 "base_bdevs_list": [ 00:16:00.878 { 00:16:00.878 "name": null, 00:16:00.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.878 "is_configured": false, 00:16:00.878 "data_offset": 0, 00:16:00.878 "data_size": 63488 00:16:00.878 }, 00:16:00.878 { 00:16:00.878 "name": null, 00:16:00.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.878 "is_configured": false, 00:16:00.878 "data_offset": 2048, 00:16:00.878 "data_size": 63488 00:16:00.878 }, 00:16:00.878 { 00:16:00.878 "name": "BaseBdev3", 00:16:00.878 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:16:00.878 "is_configured": true, 00:16:00.878 "data_offset": 2048, 00:16:00.878 "data_size": 63488 00:16:00.878 }, 00:16:00.878 { 00:16:00.878 "name": "BaseBdev4", 00:16:00.878 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:16:00.878 "is_configured": 
true, 00:16:00.878 "data_offset": 2048, 00:16:00.878 "data_size": 63488 00:16:00.878 } 00:16:00.878 ] 00:16:00.878 }' 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.878 14:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.445 14:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:01.445 14:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.445 14:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.445 [2024-11-20 14:26:40.153892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.445 [2024-11-20 14:26:40.154345] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:01.445 [2024-11-20 14:26:40.154374] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:01.445 [2024-11-20 14:26:40.154425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.445 [2024-11-20 14:26:40.167869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:16:01.445 14:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.445 14:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:01.445 [2024-11-20 14:26:40.170331] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.382 "name": "raid_bdev1", 00:16:02.382 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:16:02.382 "strip_size_kb": 0, 00:16:02.382 "state": "online", 00:16:02.382 "raid_level": "raid1", 
00:16:02.382 "superblock": true, 00:16:02.382 "num_base_bdevs": 4, 00:16:02.382 "num_base_bdevs_discovered": 3, 00:16:02.382 "num_base_bdevs_operational": 3, 00:16:02.382 "process": { 00:16:02.382 "type": "rebuild", 00:16:02.382 "target": "spare", 00:16:02.382 "progress": { 00:16:02.382 "blocks": 20480, 00:16:02.382 "percent": 32 00:16:02.382 } 00:16:02.382 }, 00:16:02.382 "base_bdevs_list": [ 00:16:02.382 { 00:16:02.382 "name": "spare", 00:16:02.382 "uuid": "89ea3881-b224-5605-90b9-d234d4be48e2", 00:16:02.382 "is_configured": true, 00:16:02.382 "data_offset": 2048, 00:16:02.382 "data_size": 63488 00:16:02.382 }, 00:16:02.382 { 00:16:02.382 "name": null, 00:16:02.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.382 "is_configured": false, 00:16:02.382 "data_offset": 2048, 00:16:02.382 "data_size": 63488 00:16:02.382 }, 00:16:02.382 { 00:16:02.382 "name": "BaseBdev3", 00:16:02.382 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:16:02.382 "is_configured": true, 00:16:02.382 "data_offset": 2048, 00:16:02.382 "data_size": 63488 00:16:02.382 }, 00:16:02.382 { 00:16:02.382 "name": "BaseBdev4", 00:16:02.382 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:16:02.382 "is_configured": true, 00:16:02.382 "data_offset": 2048, 00:16:02.382 "data_size": 63488 00:16:02.382 } 00:16:02.382 ] 00:16:02.382 }' 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:02.382 14:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.382 [2024-11-20 14:26:41.335991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.642 [2024-11-20 14:26:41.379409] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:02.642 [2024-11-20 14:26:41.379640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.642 [2024-11-20 14:26:41.379691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.642 [2024-11-20 14:26:41.379702] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.642 "name": "raid_bdev1", 00:16:02.642 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:16:02.642 "strip_size_kb": 0, 00:16:02.642 "state": "online", 00:16:02.642 "raid_level": "raid1", 00:16:02.642 "superblock": true, 00:16:02.642 "num_base_bdevs": 4, 00:16:02.642 "num_base_bdevs_discovered": 2, 00:16:02.642 "num_base_bdevs_operational": 2, 00:16:02.642 "base_bdevs_list": [ 00:16:02.642 { 00:16:02.642 "name": null, 00:16:02.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.642 "is_configured": false, 00:16:02.642 "data_offset": 0, 00:16:02.642 "data_size": 63488 00:16:02.642 }, 00:16:02.642 { 00:16:02.642 "name": null, 00:16:02.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.642 "is_configured": false, 00:16:02.642 "data_offset": 2048, 00:16:02.642 "data_size": 63488 00:16:02.642 }, 00:16:02.642 { 00:16:02.642 "name": "BaseBdev3", 00:16:02.642 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:16:02.642 "is_configured": true, 00:16:02.642 "data_offset": 2048, 00:16:02.642 "data_size": 63488 00:16:02.642 }, 00:16:02.642 { 00:16:02.642 "name": "BaseBdev4", 00:16:02.642 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:16:02.642 "is_configured": true, 00:16:02.642 "data_offset": 2048, 00:16:02.642 "data_size": 63488 00:16:02.642 } 00:16:02.642 ] 00:16:02.642 }' 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:02.642 14:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.210 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:03.210 14:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.210 14:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.210 [2024-11-20 14:26:41.933284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:03.210 [2024-11-20 14:26:41.933577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.211 [2024-11-20 14:26:41.933634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:03.211 [2024-11-20 14:26:41.933651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.211 [2024-11-20 14:26:41.934287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.211 [2024-11-20 14:26:41.934318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:03.211 [2024-11-20 14:26:41.934452] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:03.211 [2024-11-20 14:26:41.934472] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:03.211 [2024-11-20 14:26:41.934490] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:03.211 [2024-11-20 14:26:41.934533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.211 spare 00:16:03.211 [2024-11-20 14:26:41.948577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:16:03.211 14:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.211 14:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:03.211 [2024-11-20 14:26:41.951128] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:04.145 14:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.145 14:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.145 14:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.145 14:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.145 14:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.145 14:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.145 14:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.145 14:26:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.145 14:26:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.145 14:26:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.145 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.145 "name": "raid_bdev1", 00:16:04.145 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:16:04.145 "strip_size_kb": 0, 00:16:04.145 "state": "online", 00:16:04.145 
"raid_level": "raid1", 00:16:04.145 "superblock": true, 00:16:04.145 "num_base_bdevs": 4, 00:16:04.145 "num_base_bdevs_discovered": 3, 00:16:04.145 "num_base_bdevs_operational": 3, 00:16:04.145 "process": { 00:16:04.145 "type": "rebuild", 00:16:04.145 "target": "spare", 00:16:04.145 "progress": { 00:16:04.145 "blocks": 20480, 00:16:04.145 "percent": 32 00:16:04.145 } 00:16:04.145 }, 00:16:04.145 "base_bdevs_list": [ 00:16:04.145 { 00:16:04.145 "name": "spare", 00:16:04.145 "uuid": "89ea3881-b224-5605-90b9-d234d4be48e2", 00:16:04.145 "is_configured": true, 00:16:04.145 "data_offset": 2048, 00:16:04.145 "data_size": 63488 00:16:04.145 }, 00:16:04.146 { 00:16:04.146 "name": null, 00:16:04.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.146 "is_configured": false, 00:16:04.146 "data_offset": 2048, 00:16:04.146 "data_size": 63488 00:16:04.146 }, 00:16:04.146 { 00:16:04.146 "name": "BaseBdev3", 00:16:04.146 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:16:04.146 "is_configured": true, 00:16:04.146 "data_offset": 2048, 00:16:04.146 "data_size": 63488 00:16:04.146 }, 00:16:04.146 { 00:16:04.146 "name": "BaseBdev4", 00:16:04.146 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:16:04.146 "is_configured": true, 00:16:04.146 "data_offset": 2048, 00:16:04.146 "data_size": 63488 00:16:04.146 } 00:16:04.146 ] 00:16:04.146 }' 00:16:04.146 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.146 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.146 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.146 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.146 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:04.146 14:26:43 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.146 14:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.146 [2024-11-20 14:26:43.116747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.404 [2024-11-20 14:26:43.159833] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:04.404 [2024-11-20 14:26:43.160057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.404 [2024-11-20 14:26:43.160087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.404 [2024-11-20 14:26:43.160102] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:04.404 14:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.404 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:04.404 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.404 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.404 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.404 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.404 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.404 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.404 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.404 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.404 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.404 
14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.404 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.404 14:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.404 14:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.404 14:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.404 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.404 "name": "raid_bdev1", 00:16:04.404 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:16:04.404 "strip_size_kb": 0, 00:16:04.404 "state": "online", 00:16:04.404 "raid_level": "raid1", 00:16:04.404 "superblock": true, 00:16:04.404 "num_base_bdevs": 4, 00:16:04.404 "num_base_bdevs_discovered": 2, 00:16:04.404 "num_base_bdevs_operational": 2, 00:16:04.404 "base_bdevs_list": [ 00:16:04.404 { 00:16:04.404 "name": null, 00:16:04.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.404 "is_configured": false, 00:16:04.404 "data_offset": 0, 00:16:04.404 "data_size": 63488 00:16:04.404 }, 00:16:04.404 { 00:16:04.404 "name": null, 00:16:04.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.404 "is_configured": false, 00:16:04.404 "data_offset": 2048, 00:16:04.404 "data_size": 63488 00:16:04.404 }, 00:16:04.404 { 00:16:04.404 "name": "BaseBdev3", 00:16:04.404 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:16:04.404 "is_configured": true, 00:16:04.404 "data_offset": 2048, 00:16:04.404 "data_size": 63488 00:16:04.404 }, 00:16:04.404 { 00:16:04.404 "name": "BaseBdev4", 00:16:04.404 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:16:04.404 "is_configured": true, 00:16:04.404 "data_offset": 2048, 00:16:04.404 "data_size": 63488 00:16:04.404 } 00:16:04.404 ] 00:16:04.404 }' 00:16:04.404 14:26:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.404 14:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.971 "name": "raid_bdev1", 00:16:04.971 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:16:04.971 "strip_size_kb": 0, 00:16:04.971 "state": "online", 00:16:04.971 "raid_level": "raid1", 00:16:04.971 "superblock": true, 00:16:04.971 "num_base_bdevs": 4, 00:16:04.971 "num_base_bdevs_discovered": 2, 00:16:04.971 "num_base_bdevs_operational": 2, 00:16:04.971 "base_bdevs_list": [ 00:16:04.971 { 00:16:04.971 "name": null, 00:16:04.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.971 "is_configured": false, 00:16:04.971 "data_offset": 0, 00:16:04.971 "data_size": 63488 00:16:04.971 }, 00:16:04.971 
{ 00:16:04.971 "name": null, 00:16:04.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.971 "is_configured": false, 00:16:04.971 "data_offset": 2048, 00:16:04.971 "data_size": 63488 00:16:04.971 }, 00:16:04.971 { 00:16:04.971 "name": "BaseBdev3", 00:16:04.971 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:16:04.971 "is_configured": true, 00:16:04.971 "data_offset": 2048, 00:16:04.971 "data_size": 63488 00:16:04.971 }, 00:16:04.971 { 00:16:04.971 "name": "BaseBdev4", 00:16:04.971 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:16:04.971 "is_configured": true, 00:16:04.971 "data_offset": 2048, 00:16:04.971 "data_size": 63488 00:16:04.971 } 00:16:04.971 ] 00:16:04.971 }' 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.971 [2024-11-20 14:26:43.875678] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:04.971 [2024-11-20 14:26:43.875910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.971 [2024-11-20 14:26:43.875951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:04.971 [2024-11-20 14:26:43.875969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.971 [2024-11-20 14:26:43.876573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.971 [2024-11-20 14:26:43.876604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:04.971 [2024-11-20 14:26:43.876704] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:04.971 [2024-11-20 14:26:43.876728] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:04.971 [2024-11-20 14:26:43.876739] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:04.971 [2024-11-20 14:26:43.876768] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:04.971 BaseBdev1 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.971 14:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:05.906 14:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:05.906 14:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.906 14:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.906 14:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.906 14:26:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.906 14:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:05.906 14:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.906 14:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.906 14:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.906 14:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.163 14:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.163 14:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.163 14:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.163 14:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.163 14:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.163 14:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.163 "name": "raid_bdev1", 00:16:06.163 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:16:06.163 "strip_size_kb": 0, 00:16:06.163 "state": "online", 00:16:06.163 "raid_level": "raid1", 00:16:06.163 "superblock": true, 00:16:06.163 "num_base_bdevs": 4, 00:16:06.163 "num_base_bdevs_discovered": 2, 00:16:06.163 "num_base_bdevs_operational": 2, 00:16:06.163 "base_bdevs_list": [ 00:16:06.163 { 00:16:06.163 "name": null, 00:16:06.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.163 "is_configured": false, 00:16:06.163 "data_offset": 0, 00:16:06.163 "data_size": 63488 00:16:06.163 }, 00:16:06.163 { 00:16:06.163 "name": null, 00:16:06.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.163 
"is_configured": false, 00:16:06.163 "data_offset": 2048, 00:16:06.163 "data_size": 63488 00:16:06.163 }, 00:16:06.163 { 00:16:06.163 "name": "BaseBdev3", 00:16:06.163 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:16:06.163 "is_configured": true, 00:16:06.163 "data_offset": 2048, 00:16:06.163 "data_size": 63488 00:16:06.163 }, 00:16:06.163 { 00:16:06.163 "name": "BaseBdev4", 00:16:06.163 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:16:06.163 "is_configured": true, 00:16:06.163 "data_offset": 2048, 00:16:06.163 "data_size": 63488 00:16:06.163 } 00:16:06.163 ] 00:16:06.163 }' 00:16:06.163 14:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.163 14:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.421 14:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.421 14:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.421 14:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.421 14:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.421 14:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.421 14:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.421 14:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.421 14:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.421 14:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:06.680 "name": "raid_bdev1", 00:16:06.680 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:16:06.680 "strip_size_kb": 0, 00:16:06.680 "state": "online", 00:16:06.680 "raid_level": "raid1", 00:16:06.680 "superblock": true, 00:16:06.680 "num_base_bdevs": 4, 00:16:06.680 "num_base_bdevs_discovered": 2, 00:16:06.680 "num_base_bdevs_operational": 2, 00:16:06.680 "base_bdevs_list": [ 00:16:06.680 { 00:16:06.680 "name": null, 00:16:06.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.680 "is_configured": false, 00:16:06.680 "data_offset": 0, 00:16:06.680 "data_size": 63488 00:16:06.680 }, 00:16:06.680 { 00:16:06.680 "name": null, 00:16:06.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.680 "is_configured": false, 00:16:06.680 "data_offset": 2048, 00:16:06.680 "data_size": 63488 00:16:06.680 }, 00:16:06.680 { 00:16:06.680 "name": "BaseBdev3", 00:16:06.680 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:16:06.680 "is_configured": true, 00:16:06.680 "data_offset": 2048, 00:16:06.680 "data_size": 63488 00:16:06.680 }, 00:16:06.680 { 00:16:06.680 "name": "BaseBdev4", 00:16:06.680 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:16:06.680 "is_configured": true, 00:16:06.680 "data_offset": 2048, 00:16:06.680 "data_size": 63488 00:16:06.680 } 00:16:06.680 ] 00:16:06.680 }' 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.680 [2024-11-20 14:26:45.544264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.680 [2024-11-20 14:26:45.544640] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:06.680 [2024-11-20 14:26:45.544669] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:06.680 request: 00:16:06.680 { 00:16:06.680 "base_bdev": "BaseBdev1", 00:16:06.680 "raid_bdev": "raid_bdev1", 00:16:06.680 "method": "bdev_raid_add_base_bdev", 00:16:06.680 "req_id": 1 00:16:06.680 } 00:16:06.680 Got JSON-RPC error response 00:16:06.680 response: 00:16:06.680 { 00:16:06.680 "code": -22, 00:16:06.680 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:06.680 } 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:06.680 14:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:07.616 14:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:07.616 14:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.616 14:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.616 14:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.616 14:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.616 14:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.616 14:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.616 14:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.616 14:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.616 14:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.616 14:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.616 14:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.616 14:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.616 14:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:07.616 14:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.875 14:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.875 "name": "raid_bdev1", 00:16:07.875 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:16:07.875 "strip_size_kb": 0, 00:16:07.875 "state": "online", 00:16:07.875 "raid_level": "raid1", 00:16:07.875 "superblock": true, 00:16:07.875 "num_base_bdevs": 4, 00:16:07.875 "num_base_bdevs_discovered": 2, 00:16:07.875 "num_base_bdevs_operational": 2, 00:16:07.875 "base_bdevs_list": [ 00:16:07.875 { 00:16:07.875 "name": null, 00:16:07.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.875 "is_configured": false, 00:16:07.875 "data_offset": 0, 00:16:07.875 "data_size": 63488 00:16:07.875 }, 00:16:07.875 { 00:16:07.875 "name": null, 00:16:07.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.875 "is_configured": false, 00:16:07.875 "data_offset": 2048, 00:16:07.875 "data_size": 63488 00:16:07.875 }, 00:16:07.875 { 00:16:07.875 "name": "BaseBdev3", 00:16:07.875 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:16:07.875 "is_configured": true, 00:16:07.875 "data_offset": 2048, 00:16:07.875 "data_size": 63488 00:16:07.875 }, 00:16:07.875 { 00:16:07.875 "name": "BaseBdev4", 00:16:07.875 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:16:07.875 "is_configured": true, 00:16:07.875 "data_offset": 2048, 00:16:07.875 "data_size": 63488 00:16:07.875 } 00:16:07.875 ] 00:16:07.875 }' 00:16:07.875 14:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.875 14:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.134 14:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.134 14:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.134 14:26:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.134 14:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.134 14:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.134 14:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.134 14:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.134 14:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.134 14:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.134 14:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.134 14:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.134 "name": "raid_bdev1", 00:16:08.134 "uuid": "65f175e8-f163-4252-9f9c-596ffb81b740", 00:16:08.134 "strip_size_kb": 0, 00:16:08.134 "state": "online", 00:16:08.134 "raid_level": "raid1", 00:16:08.134 "superblock": true, 00:16:08.134 "num_base_bdevs": 4, 00:16:08.134 "num_base_bdevs_discovered": 2, 00:16:08.134 "num_base_bdevs_operational": 2, 00:16:08.134 "base_bdevs_list": [ 00:16:08.134 { 00:16:08.134 "name": null, 00:16:08.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.134 "is_configured": false, 00:16:08.134 "data_offset": 0, 00:16:08.134 "data_size": 63488 00:16:08.134 }, 00:16:08.134 { 00:16:08.134 "name": null, 00:16:08.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.134 "is_configured": false, 00:16:08.134 "data_offset": 2048, 00:16:08.134 "data_size": 63488 00:16:08.134 }, 00:16:08.134 { 00:16:08.134 "name": "BaseBdev3", 00:16:08.134 "uuid": "4993494f-c0ba-532a-852f-b0954456be49", 00:16:08.134 "is_configured": true, 00:16:08.134 "data_offset": 2048, 00:16:08.134 "data_size": 63488 00:16:08.134 }, 
00:16:08.134 { 00:16:08.134 "name": "BaseBdev4", 00:16:08.134 "uuid": "646b0bbe-f307-54f3-8e1a-e673e28d4c2a", 00:16:08.134 "is_configured": true, 00:16:08.134 "data_offset": 2048, 00:16:08.134 "data_size": 63488 00:16:08.134 } 00:16:08.134 ] 00:16:08.134 }' 00:16:08.134 14:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.393 14:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.393 14:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.393 14:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.393 14:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78302 00:16:08.393 14:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78302 ']' 00:16:08.393 14:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78302 00:16:08.393 14:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:08.393 14:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.393 14:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78302 00:16:08.393 killing process with pid 78302 00:16:08.393 Received shutdown signal, test time was about 60.000000 seconds 00:16:08.393 00:16:08.393 Latency(us) 00:16:08.393 [2024-11-20T14:26:47.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.393 [2024-11-20T14:26:47.375Z] =================================================================================================================== 00:16:08.393 [2024-11-20T14:26:47.375Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:08.393 14:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:16:08.393 14:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:08.393 14:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78302' 00:16:08.393 14:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78302 00:16:08.393 [2024-11-20 14:26:47.252861] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:08.393 14:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78302 00:16:08.393 [2024-11-20 14:26:47.253030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.394 [2024-11-20 14:26:47.253129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.394 [2024-11-20 14:26:47.253145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:08.961 [2024-11-20 14:26:47.687012] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:09.897 00:16:09.897 real 0m29.693s 00:16:09.897 user 0m36.274s 00:16:09.897 sys 0m4.147s 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.897 ************************************ 00:16:09.897 END TEST raid_rebuild_test_sb 00:16:09.897 ************************************ 00:16:09.897 14:26:48 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:16:09.897 14:26:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:09.897 14:26:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.897 14:26:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:16:09.897 ************************************ 00:16:09.897 START TEST raid_rebuild_test_io 00:16:09.897 ************************************ 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:09.897 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:09.898 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:09.898 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79095 00:16:09.898 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79095 00:16:09.898 14:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:09.898 14:26:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79095 ']' 00:16:09.898 14:26:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.898 14:26:48 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.898 14:26:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.898 14:26:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.898 14:26:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.156 [2024-11-20 14:26:48.905635] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:16:10.156 [2024-11-20 14:26:48.906069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79095 ] 00:16:10.156 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:10.156 Zero copy mechanism will not be used. 
00:16:10.156 [2024-11-20 14:26:49.099365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.416 [2024-11-20 14:26:49.254654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.674 [2024-11-20 14:26:49.477040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.674 [2024-11-20 14:26:49.477145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.934 14:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.934 14:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:16:10.934 14:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:10.934 14:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:10.934 14:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.934 14:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.934 BaseBdev1_malloc 00:16:10.934 14:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.934 14:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:10.934 14:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.934 14:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.934 [2024-11-20 14:26:49.903455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:10.934 [2024-11-20 14:26:49.903687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.934 [2024-11-20 14:26:49.903769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:10.934 [2024-11-20 
14:26:49.903797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.934 [2024-11-20 14:26:49.906693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.934 [2024-11-20 14:26:49.906867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:10.934 BaseBdev1 00:16:10.934 14:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.934 14:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:10.934 14:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:10.934 14:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.934 14:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.201 BaseBdev2_malloc 00:16:11.201 14:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.201 14:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:11.201 14:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.201 14:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.201 [2024-11-20 14:26:49.951962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:11.201 [2024-11-20 14:26:49.952187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.201 [2024-11-20 14:26:49.952266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:11.201 [2024-11-20 14:26:49.952500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.201 [2024-11-20 14:26:49.955360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:11.201 [2024-11-20 14:26:49.955419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:11.201 BaseBdev2 00:16:11.201 14:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.201 14:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:11.201 14:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:11.201 14:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.201 14:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.201 BaseBdev3_malloc 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.201 [2024-11-20 14:26:50.016290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:11.201 [2024-11-20 14:26:50.016503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.201 [2024-11-20 14:26:50.016590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:11.201 [2024-11-20 14:26:50.016748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.201 [2024-11-20 14:26:50.019583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.201 [2024-11-20 14:26:50.019640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:11.201 BaseBdev3 00:16:11.201 14:26:50 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.201 BaseBdev4_malloc 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.201 [2024-11-20 14:26:50.069512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:11.201 [2024-11-20 14:26:50.069746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.201 [2024-11-20 14:26:50.069828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:11.201 [2024-11-20 14:26:50.069961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.201 [2024-11-20 14:26:50.072855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.201 BaseBdev4 00:16:11.201 [2024-11-20 14:26:50.073109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.201 spare_malloc 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.201 spare_delay 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.201 [2024-11-20 14:26:50.135298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:11.201 [2024-11-20 14:26:50.135507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.201 [2024-11-20 14:26:50.135551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:11.201 [2024-11-20 14:26:50.135575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.201 spare 00:16:11.201 [2024-11-20 14:26:50.138385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.201 [2024-11-20 14:26:50.138437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.201 [2024-11-20 14:26:50.143412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.201 [2024-11-20 14:26:50.145891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.201 [2024-11-20 14:26:50.145981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:11.201 [2024-11-20 14:26:50.146286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:11.201 [2024-11-20 14:26:50.146455] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:11.201 [2024-11-20 14:26:50.146540] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:11.201 [2024-11-20 14:26:50.147055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:11.201 [2024-11-20 14:26:50.147467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:11.201 [2024-11-20 14:26:50.147607] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:11.201 [2024-11-20 14:26:50.148036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:11.201 14:26:50 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.201 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.202 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.202 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.202 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.202 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.202 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.202 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.469 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.469 "name": "raid_bdev1", 00:16:11.469 "uuid": "48d19222-3dff-452e-8dee-d48483f15b98", 00:16:11.469 "strip_size_kb": 0, 00:16:11.469 "state": "online", 00:16:11.469 "raid_level": "raid1", 00:16:11.469 "superblock": false, 00:16:11.469 "num_base_bdevs": 4, 00:16:11.469 "num_base_bdevs_discovered": 4, 00:16:11.469 "num_base_bdevs_operational": 4, 00:16:11.470 "base_bdevs_list": [ 00:16:11.470 
{ 00:16:11.470 "name": "BaseBdev1", 00:16:11.470 "uuid": "40ab20ac-f285-5970-8e0e-4e5d073f03a1", 00:16:11.470 "is_configured": true, 00:16:11.470 "data_offset": 0, 00:16:11.470 "data_size": 65536 00:16:11.470 }, 00:16:11.470 { 00:16:11.470 "name": "BaseBdev2", 00:16:11.470 "uuid": "53ca703c-ae2d-58e5-bcb1-72b0860d3af8", 00:16:11.470 "is_configured": true, 00:16:11.470 "data_offset": 0, 00:16:11.470 "data_size": 65536 00:16:11.470 }, 00:16:11.470 { 00:16:11.470 "name": "BaseBdev3", 00:16:11.470 "uuid": "8ec18414-4d5c-51bc-996a-bbad87f44a65", 00:16:11.470 "is_configured": true, 00:16:11.470 "data_offset": 0, 00:16:11.470 "data_size": 65536 00:16:11.470 }, 00:16:11.470 { 00:16:11.470 "name": "BaseBdev4", 00:16:11.470 "uuid": "d5de192c-4d65-553b-85bb-32d40e3d1104", 00:16:11.470 "is_configured": true, 00:16:11.470 "data_offset": 0, 00:16:11.470 "data_size": 65536 00:16:11.470 } 00:16:11.470 ] 00:16:11.470 }' 00:16:11.470 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.470 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.727 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:11.727 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.727 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.727 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:11.727 [2024-11-20 14:26:50.660568] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.727 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.727 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:11.727 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.727 
14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.727 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.727 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.986 [2024-11-20 14:26:50.760119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.986 "name": "raid_bdev1", 00:16:11.986 "uuid": "48d19222-3dff-452e-8dee-d48483f15b98", 00:16:11.986 "strip_size_kb": 0, 00:16:11.986 "state": "online", 00:16:11.986 "raid_level": "raid1", 00:16:11.986 "superblock": false, 00:16:11.986 "num_base_bdevs": 4, 00:16:11.986 "num_base_bdevs_discovered": 3, 00:16:11.986 "num_base_bdevs_operational": 3, 00:16:11.986 "base_bdevs_list": [ 00:16:11.986 { 00:16:11.986 "name": null, 00:16:11.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.986 "is_configured": false, 00:16:11.986 "data_offset": 0, 00:16:11.986 "data_size": 65536 00:16:11.986 }, 00:16:11.986 { 00:16:11.986 "name": "BaseBdev2", 00:16:11.986 "uuid": "53ca703c-ae2d-58e5-bcb1-72b0860d3af8", 00:16:11.986 "is_configured": true, 00:16:11.986 "data_offset": 0, 00:16:11.986 "data_size": 65536 00:16:11.986 }, 00:16:11.986 { 00:16:11.986 "name": "BaseBdev3", 00:16:11.986 "uuid": 
"8ec18414-4d5c-51bc-996a-bbad87f44a65", 00:16:11.986 "is_configured": true, 00:16:11.986 "data_offset": 0, 00:16:11.986 "data_size": 65536 00:16:11.986 }, 00:16:11.986 { 00:16:11.986 "name": "BaseBdev4", 00:16:11.986 "uuid": "d5de192c-4d65-553b-85bb-32d40e3d1104", 00:16:11.986 "is_configured": true, 00:16:11.986 "data_offset": 0, 00:16:11.986 "data_size": 65536 00:16:11.986 } 00:16:11.986 ] 00:16:11.986 }' 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.986 14:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.986 [2024-11-20 14:26:50.908296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:11.986 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:11.986 Zero copy mechanism will not be used. 00:16:11.986 Running I/O for 60 seconds... 00:16:12.555 14:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:12.555 14:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.555 14:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.555 [2024-11-20 14:26:51.280266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.555 14:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.555 14:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:12.555 [2024-11-20 14:26:51.347782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:12.555 [2024-11-20 14:26:51.350555] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:12.555 [2024-11-20 14:26:51.452555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:12.555 
[2024-11-20 14:26:51.453221] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:12.813 [2024-11-20 14:26:51.584937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:12.813 [2024-11-20 14:26:51.585328] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:13.072 [2024-11-20 14:26:51.822918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:13.072 [2024-11-20 14:26:51.823850] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:13.072 138.00 IOPS, 414.00 MiB/s [2024-11-20T14:26:52.054Z] [2024-11-20 14:26:51.972701] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:13.072 [2024-11-20 14:26:51.973767] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:13.639 [2024-11-20 14:26:52.331185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:13.639 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.639 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.639 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.639 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.639 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.639 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:13.639 14:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.639 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.639 14:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.639 14:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.639 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.639 "name": "raid_bdev1", 00:16:13.639 "uuid": "48d19222-3dff-452e-8dee-d48483f15b98", 00:16:13.639 "strip_size_kb": 0, 00:16:13.639 "state": "online", 00:16:13.639 "raid_level": "raid1", 00:16:13.639 "superblock": false, 00:16:13.639 "num_base_bdevs": 4, 00:16:13.639 "num_base_bdevs_discovered": 4, 00:16:13.639 "num_base_bdevs_operational": 4, 00:16:13.639 "process": { 00:16:13.639 "type": "rebuild", 00:16:13.639 "target": "spare", 00:16:13.639 "progress": { 00:16:13.639 "blocks": 14336, 00:16:13.639 "percent": 21 00:16:13.639 } 00:16:13.639 }, 00:16:13.639 "base_bdevs_list": [ 00:16:13.639 { 00:16:13.639 "name": "spare", 00:16:13.639 "uuid": "030b438e-5aff-583b-851c-123559377ddf", 00:16:13.639 "is_configured": true, 00:16:13.639 "data_offset": 0, 00:16:13.639 "data_size": 65536 00:16:13.639 }, 00:16:13.639 { 00:16:13.639 "name": "BaseBdev2", 00:16:13.639 "uuid": "53ca703c-ae2d-58e5-bcb1-72b0860d3af8", 00:16:13.639 "is_configured": true, 00:16:13.639 "data_offset": 0, 00:16:13.639 "data_size": 65536 00:16:13.639 }, 00:16:13.639 { 00:16:13.639 "name": "BaseBdev3", 00:16:13.639 "uuid": "8ec18414-4d5c-51bc-996a-bbad87f44a65", 00:16:13.639 "is_configured": true, 00:16:13.639 "data_offset": 0, 00:16:13.639 "data_size": 65536 00:16:13.639 }, 00:16:13.639 { 00:16:13.639 "name": "BaseBdev4", 00:16:13.639 "uuid": "d5de192c-4d65-553b-85bb-32d40e3d1104", 00:16:13.639 "is_configured": true, 00:16:13.639 "data_offset": 
0, 00:16:13.639 "data_size": 65536 00:16:13.639 } 00:16:13.639 ] 00:16:13.640 }' 00:16:13.640 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.640 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.640 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.640 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.640 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:13.640 14:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.640 14:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.640 [2024-11-20 14:26:52.509136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.640 [2024-11-20 14:26:52.584164] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:13.898 [2024-11-20 14:26:52.628969] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:13.898 [2024-11-20 14:26:52.644212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.898 [2024-11-20 14:26:52.644291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.898 [2024-11-20 14:26:52.644319] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:13.898 [2024-11-20 14:26:52.694645] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:13.898 14:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.898 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:13.898 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.898 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.898 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.898 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.898 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:13.898 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.898 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.898 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.898 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.898 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.898 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.898 14:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.898 14:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.898 14:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.898 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.898 "name": "raid_bdev1", 00:16:13.898 "uuid": "48d19222-3dff-452e-8dee-d48483f15b98", 00:16:13.898 "strip_size_kb": 0, 00:16:13.898 "state": "online", 00:16:13.898 "raid_level": "raid1", 00:16:13.898 "superblock": false, 00:16:13.898 "num_base_bdevs": 4, 00:16:13.898 "num_base_bdevs_discovered": 3, 00:16:13.898 
"num_base_bdevs_operational": 3, 00:16:13.898 "base_bdevs_list": [ 00:16:13.898 { 00:16:13.898 "name": null, 00:16:13.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.899 "is_configured": false, 00:16:13.899 "data_offset": 0, 00:16:13.899 "data_size": 65536 00:16:13.899 }, 00:16:13.899 { 00:16:13.899 "name": "BaseBdev2", 00:16:13.899 "uuid": "53ca703c-ae2d-58e5-bcb1-72b0860d3af8", 00:16:13.899 "is_configured": true, 00:16:13.899 "data_offset": 0, 00:16:13.899 "data_size": 65536 00:16:13.899 }, 00:16:13.899 { 00:16:13.899 "name": "BaseBdev3", 00:16:13.899 "uuid": "8ec18414-4d5c-51bc-996a-bbad87f44a65", 00:16:13.899 "is_configured": true, 00:16:13.899 "data_offset": 0, 00:16:13.899 "data_size": 65536 00:16:13.899 }, 00:16:13.899 { 00:16:13.899 "name": "BaseBdev4", 00:16:13.899 "uuid": "d5de192c-4d65-553b-85bb-32d40e3d1104", 00:16:13.899 "is_configured": true, 00:16:13.899 "data_offset": 0, 00:16:13.899 "data_size": 65536 00:16:13.899 } 00:16:13.899 ] 00:16:13.899 }' 00:16:13.899 14:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.899 14:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.416 118.50 IOPS, 355.50 MiB/s [2024-11-20T14:26:53.398Z] 14:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.416 14:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.416 14:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.416 14:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.416 14:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.416 14:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.416 14:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:16:14.416 14:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.416 14:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.416 14:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.416 14:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.416 "name": "raid_bdev1", 00:16:14.416 "uuid": "48d19222-3dff-452e-8dee-d48483f15b98", 00:16:14.416 "strip_size_kb": 0, 00:16:14.416 "state": "online", 00:16:14.416 "raid_level": "raid1", 00:16:14.416 "superblock": false, 00:16:14.416 "num_base_bdevs": 4, 00:16:14.416 "num_base_bdevs_discovered": 3, 00:16:14.416 "num_base_bdevs_operational": 3, 00:16:14.416 "base_bdevs_list": [ 00:16:14.416 { 00:16:14.416 "name": null, 00:16:14.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.416 "is_configured": false, 00:16:14.416 "data_offset": 0, 00:16:14.416 "data_size": 65536 00:16:14.416 }, 00:16:14.416 { 00:16:14.416 "name": "BaseBdev2", 00:16:14.416 "uuid": "53ca703c-ae2d-58e5-bcb1-72b0860d3af8", 00:16:14.416 "is_configured": true, 00:16:14.416 "data_offset": 0, 00:16:14.416 "data_size": 65536 00:16:14.416 }, 00:16:14.416 { 00:16:14.416 "name": "BaseBdev3", 00:16:14.416 "uuid": "8ec18414-4d5c-51bc-996a-bbad87f44a65", 00:16:14.416 "is_configured": true, 00:16:14.416 "data_offset": 0, 00:16:14.416 "data_size": 65536 00:16:14.416 }, 00:16:14.416 { 00:16:14.416 "name": "BaseBdev4", 00:16:14.416 "uuid": "d5de192c-4d65-553b-85bb-32d40e3d1104", 00:16:14.416 "is_configured": true, 00:16:14.416 "data_offset": 0, 00:16:14.416 "data_size": 65536 00:16:14.416 } 00:16:14.416 ] 00:16:14.416 }' 00:16:14.416 14:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.416 14:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.416 
14:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.675 14:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.675 14:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:14.675 14:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.675 14:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.675 [2024-11-20 14:26:53.420315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:14.675 14:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.675 14:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:14.675 [2024-11-20 14:26:53.508583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:14.675 [2024-11-20 14:26:53.511375] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:14.675 [2024-11-20 14:26:53.620626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:14.676 [2024-11-20 14:26:53.621355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:14.935 [2024-11-20 14:26:53.835789] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:14.935 [2024-11-20 14:26:53.836356] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:15.453 125.00 IOPS, 375.00 MiB/s [2024-11-20T14:26:54.435Z] [2024-11-20 14:26:54.252607] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:15.711 14:26:54 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.711 [2024-11-20 14:26:54.501025] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.711 "name": "raid_bdev1", 00:16:15.711 "uuid": "48d19222-3dff-452e-8dee-d48483f15b98", 00:16:15.711 "strip_size_kb": 0, 00:16:15.711 "state": "online", 00:16:15.711 "raid_level": "raid1", 00:16:15.711 "superblock": false, 00:16:15.711 "num_base_bdevs": 4, 00:16:15.711 "num_base_bdevs_discovered": 4, 00:16:15.711 "num_base_bdevs_operational": 4, 00:16:15.711 "process": { 00:16:15.711 "type": "rebuild", 00:16:15.711 "target": "spare", 00:16:15.711 "progress": { 00:16:15.711 "blocks": 12288, 00:16:15.711 "percent": 18 00:16:15.711 } 00:16:15.711 }, 00:16:15.711 "base_bdevs_list": [ 00:16:15.711 { 00:16:15.711 "name": "spare", 
00:16:15.711 "uuid": "030b438e-5aff-583b-851c-123559377ddf", 00:16:15.711 "is_configured": true, 00:16:15.711 "data_offset": 0, 00:16:15.711 "data_size": 65536 00:16:15.711 }, 00:16:15.711 { 00:16:15.711 "name": "BaseBdev2", 00:16:15.711 "uuid": "53ca703c-ae2d-58e5-bcb1-72b0860d3af8", 00:16:15.711 "is_configured": true, 00:16:15.711 "data_offset": 0, 00:16:15.711 "data_size": 65536 00:16:15.711 }, 00:16:15.711 { 00:16:15.711 "name": "BaseBdev3", 00:16:15.711 "uuid": "8ec18414-4d5c-51bc-996a-bbad87f44a65", 00:16:15.711 "is_configured": true, 00:16:15.711 "data_offset": 0, 00:16:15.711 "data_size": 65536 00:16:15.711 }, 00:16:15.711 { 00:16:15.711 "name": "BaseBdev4", 00:16:15.711 "uuid": "d5de192c-4d65-553b-85bb-32d40e3d1104", 00:16:15.711 "is_configured": true, 00:16:15.711 "data_offset": 0, 00:16:15.711 "data_size": 65536 00:16:15.711 } 00:16:15.711 ] 00:16:15.711 }' 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:15.711 14:26:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:15.712 14:26:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.712 [2024-11-20 14:26:54.633491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:15.971 [2024-11-20 14:26:54.733408] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:15.971 [2024-11-20 14:26:54.734051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:15.971 [2024-11-20 14:26:54.743105] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:15.971 [2024-11-20 14:26:54.743271] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.971 [2024-11-20 14:26:54.779059] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.971 "name": "raid_bdev1", 00:16:15.971 "uuid": "48d19222-3dff-452e-8dee-d48483f15b98", 00:16:15.971 "strip_size_kb": 0, 00:16:15.971 "state": "online", 00:16:15.971 "raid_level": "raid1", 00:16:15.971 "superblock": false, 00:16:15.971 "num_base_bdevs": 4, 00:16:15.971 "num_base_bdevs_discovered": 3, 00:16:15.971 "num_base_bdevs_operational": 3, 00:16:15.971 "process": { 00:16:15.971 "type": "rebuild", 00:16:15.971 "target": "spare", 00:16:15.971 "progress": { 00:16:15.971 "blocks": 16384, 00:16:15.971 "percent": 25 00:16:15.971 } 00:16:15.971 }, 00:16:15.971 "base_bdevs_list": [ 00:16:15.971 { 00:16:15.971 "name": "spare", 00:16:15.971 "uuid": "030b438e-5aff-583b-851c-123559377ddf", 00:16:15.971 "is_configured": true, 00:16:15.971 "data_offset": 0, 00:16:15.971 "data_size": 65536 00:16:15.971 }, 00:16:15.971 { 00:16:15.971 "name": null, 00:16:15.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.971 "is_configured": false, 00:16:15.971 "data_offset": 0, 00:16:15.971 "data_size": 65536 00:16:15.971 }, 00:16:15.971 { 00:16:15.971 "name": "BaseBdev3", 00:16:15.971 "uuid": "8ec18414-4d5c-51bc-996a-bbad87f44a65", 00:16:15.971 "is_configured": true, 00:16:15.971 "data_offset": 0, 00:16:15.971 "data_size": 65536 00:16:15.971 }, 00:16:15.971 { 00:16:15.971 "name": "BaseBdev4", 00:16:15.971 "uuid": "d5de192c-4d65-553b-85bb-32d40e3d1104", 00:16:15.971 "is_configured": true, 00:16:15.971 "data_offset": 0, 00:16:15.971 "data_size": 65536 00:16:15.971 } 00:16:15.971 
] 00:16:15.971 }' 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=521 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.971 14:26:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.971 119.75 IOPS, 359.25 MiB/s [2024-11-20T14:26:54.953Z] 14:26:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.230 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.230 "name": "raid_bdev1", 00:16:16.230 
"uuid": "48d19222-3dff-452e-8dee-d48483f15b98", 00:16:16.230 "strip_size_kb": 0, 00:16:16.230 "state": "online", 00:16:16.230 "raid_level": "raid1", 00:16:16.230 "superblock": false, 00:16:16.230 "num_base_bdevs": 4, 00:16:16.230 "num_base_bdevs_discovered": 3, 00:16:16.230 "num_base_bdevs_operational": 3, 00:16:16.230 "process": { 00:16:16.230 "type": "rebuild", 00:16:16.230 "target": "spare", 00:16:16.230 "progress": { 00:16:16.230 "blocks": 16384, 00:16:16.230 "percent": 25 00:16:16.230 } 00:16:16.230 }, 00:16:16.230 "base_bdevs_list": [ 00:16:16.230 { 00:16:16.230 "name": "spare", 00:16:16.230 "uuid": "030b438e-5aff-583b-851c-123559377ddf", 00:16:16.230 "is_configured": true, 00:16:16.230 "data_offset": 0, 00:16:16.230 "data_size": 65536 00:16:16.230 }, 00:16:16.230 { 00:16:16.230 "name": null, 00:16:16.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.230 "is_configured": false, 00:16:16.230 "data_offset": 0, 00:16:16.230 "data_size": 65536 00:16:16.230 }, 00:16:16.230 { 00:16:16.230 "name": "BaseBdev3", 00:16:16.230 "uuid": "8ec18414-4d5c-51bc-996a-bbad87f44a65", 00:16:16.230 "is_configured": true, 00:16:16.230 "data_offset": 0, 00:16:16.230 "data_size": 65536 00:16:16.230 }, 00:16:16.230 { 00:16:16.230 "name": "BaseBdev4", 00:16:16.230 "uuid": "d5de192c-4d65-553b-85bb-32d40e3d1104", 00:16:16.230 "is_configured": true, 00:16:16.230 "data_offset": 0, 00:16:16.230 "data_size": 65536 00:16:16.230 } 00:16:16.230 ] 00:16:16.230 }' 00:16:16.230 14:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.230 14:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.230 14:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.230 14:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.230 14:26:55 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.489 [2024-11-20 14:26:55.248930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:17.057 106.80 IOPS, 320.40 MiB/s [2024-11-20T14:26:56.039Z] [2024-11-20 14:26:55.946733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:17.316 14:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.316 14:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.316 14:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.316 14:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.316 14:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.316 14:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.316 14:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.316 14:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.316 14:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.316 14:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.316 14:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.316 14:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.316 "name": "raid_bdev1", 00:16:17.316 "uuid": "48d19222-3dff-452e-8dee-d48483f15b98", 00:16:17.316 "strip_size_kb": 0, 00:16:17.316 "state": "online", 00:16:17.316 "raid_level": "raid1", 00:16:17.316 "superblock": false, 00:16:17.316 "num_base_bdevs": 
4, 00:16:17.316 "num_base_bdevs_discovered": 3, 00:16:17.316 "num_base_bdevs_operational": 3, 00:16:17.316 "process": { 00:16:17.316 "type": "rebuild", 00:16:17.316 "target": "spare", 00:16:17.316 "progress": { 00:16:17.316 "blocks": 32768, 00:16:17.316 "percent": 50 00:16:17.316 } 00:16:17.316 }, 00:16:17.316 "base_bdevs_list": [ 00:16:17.316 { 00:16:17.316 "name": "spare", 00:16:17.316 "uuid": "030b438e-5aff-583b-851c-123559377ddf", 00:16:17.316 "is_configured": true, 00:16:17.316 "data_offset": 0, 00:16:17.316 "data_size": 65536 00:16:17.316 }, 00:16:17.316 { 00:16:17.316 "name": null, 00:16:17.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.316 "is_configured": false, 00:16:17.316 "data_offset": 0, 00:16:17.316 "data_size": 65536 00:16:17.316 }, 00:16:17.316 { 00:16:17.316 "name": "BaseBdev3", 00:16:17.316 "uuid": "8ec18414-4d5c-51bc-996a-bbad87f44a65", 00:16:17.316 "is_configured": true, 00:16:17.316 "data_offset": 0, 00:16:17.316 "data_size": 65536 00:16:17.316 }, 00:16:17.316 { 00:16:17.316 "name": "BaseBdev4", 00:16:17.316 "uuid": "d5de192c-4d65-553b-85bb-32d40e3d1104", 00:16:17.316 "is_configured": true, 00:16:17.316 "data_offset": 0, 00:16:17.316 "data_size": 65536 00:16:17.316 } 00:16:17.316 ] 00:16:17.316 }' 00:16:17.316 14:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.316 [2024-11-20 14:26:56.179203] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:17.316 14:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.316 14:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.316 14:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.316 14:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.254 [2024-11-20 
14:26:56.873964] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:18.513 98.83 IOPS, 296.50 MiB/s [2024-11-20T14:26:57.495Z] 14:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.513 14:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.513 14:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.513 14:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.513 14:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.513 14:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.513 14:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.513 14:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.513 14:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.513 14:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.513 14:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.513 14:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.513 "name": "raid_bdev1", 00:16:18.513 "uuid": "48d19222-3dff-452e-8dee-d48483f15b98", 00:16:18.513 "strip_size_kb": 0, 00:16:18.513 "state": "online", 00:16:18.513 "raid_level": "raid1", 00:16:18.513 "superblock": false, 00:16:18.513 "num_base_bdevs": 4, 00:16:18.513 "num_base_bdevs_discovered": 3, 00:16:18.514 "num_base_bdevs_operational": 3, 00:16:18.514 "process": { 00:16:18.514 "type": "rebuild", 00:16:18.514 "target": "spare", 00:16:18.514 "progress": { 00:16:18.514 
"blocks": 53248, 00:16:18.514 "percent": 81 00:16:18.514 } 00:16:18.514 }, 00:16:18.514 "base_bdevs_list": [ 00:16:18.514 { 00:16:18.514 "name": "spare", 00:16:18.514 "uuid": "030b438e-5aff-583b-851c-123559377ddf", 00:16:18.514 "is_configured": true, 00:16:18.514 "data_offset": 0, 00:16:18.514 "data_size": 65536 00:16:18.514 }, 00:16:18.514 { 00:16:18.514 "name": null, 00:16:18.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.514 "is_configured": false, 00:16:18.514 "data_offset": 0, 00:16:18.514 "data_size": 65536 00:16:18.514 }, 00:16:18.514 { 00:16:18.514 "name": "BaseBdev3", 00:16:18.514 "uuid": "8ec18414-4d5c-51bc-996a-bbad87f44a65", 00:16:18.514 "is_configured": true, 00:16:18.514 "data_offset": 0, 00:16:18.514 "data_size": 65536 00:16:18.514 }, 00:16:18.514 { 00:16:18.514 "name": "BaseBdev4", 00:16:18.514 "uuid": "d5de192c-4d65-553b-85bb-32d40e3d1104", 00:16:18.514 "is_configured": true, 00:16:18.514 "data_offset": 0, 00:16:18.514 "data_size": 65536 00:16:18.514 } 00:16:18.514 ] 00:16:18.514 }' 00:16:18.514 14:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.514 14:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.514 14:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.514 14:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.514 14:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:19.080 [2024-11-20 14:26:57.870979] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:19.080 90.00 IOPS, 270.00 MiB/s [2024-11-20T14:26:58.062Z] [2024-11-20 14:26:57.979297] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:19.080 [2024-11-20 14:26:57.982694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:16:19.647 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.647 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.647 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.647 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.647 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.647 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.647 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.647 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.647 14:26:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.647 14:26:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.647 14:26:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.647 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.647 "name": "raid_bdev1", 00:16:19.647 "uuid": "48d19222-3dff-452e-8dee-d48483f15b98", 00:16:19.647 "strip_size_kb": 0, 00:16:19.647 "state": "online", 00:16:19.647 "raid_level": "raid1", 00:16:19.647 "superblock": false, 00:16:19.647 "num_base_bdevs": 4, 00:16:19.647 "num_base_bdevs_discovered": 3, 00:16:19.647 "num_base_bdevs_operational": 3, 00:16:19.647 "base_bdevs_list": [ 00:16:19.647 { 00:16:19.647 "name": "spare", 00:16:19.647 "uuid": "030b438e-5aff-583b-851c-123559377ddf", 00:16:19.647 "is_configured": true, 00:16:19.647 "data_offset": 0, 00:16:19.647 "data_size": 65536 00:16:19.647 }, 00:16:19.647 { 00:16:19.647 "name": null, 
00:16:19.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.647 "is_configured": false, 00:16:19.647 "data_offset": 0, 00:16:19.647 "data_size": 65536 00:16:19.647 }, 00:16:19.647 { 00:16:19.647 "name": "BaseBdev3", 00:16:19.647 "uuid": "8ec18414-4d5c-51bc-996a-bbad87f44a65", 00:16:19.647 "is_configured": true, 00:16:19.647 "data_offset": 0, 00:16:19.647 "data_size": 65536 00:16:19.647 }, 00:16:19.647 { 00:16:19.647 "name": "BaseBdev4", 00:16:19.647 "uuid": "d5de192c-4d65-553b-85bb-32d40e3d1104", 00:16:19.647 "is_configured": true, 00:16:19.647 "data_offset": 0, 00:16:19.647 "data_size": 65536 00:16:19.647 } 00:16:19.647 ] 00:16:19.647 }' 00:16:19.647 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.647 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:19.647 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.647 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:19.647 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:19.648 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:19.648 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.648 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:19.648 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:19.648 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.648 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.648 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:19.648 14:26:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.648 14:26:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.648 14:26:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.943 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.943 "name": "raid_bdev1", 00:16:19.943 "uuid": "48d19222-3dff-452e-8dee-d48483f15b98", 00:16:19.943 "strip_size_kb": 0, 00:16:19.943 "state": "online", 00:16:19.943 "raid_level": "raid1", 00:16:19.943 "superblock": false, 00:16:19.943 "num_base_bdevs": 4, 00:16:19.943 "num_base_bdevs_discovered": 3, 00:16:19.943 "num_base_bdevs_operational": 3, 00:16:19.943 "base_bdevs_list": [ 00:16:19.943 { 00:16:19.943 "name": "spare", 00:16:19.943 "uuid": "030b438e-5aff-583b-851c-123559377ddf", 00:16:19.943 "is_configured": true, 00:16:19.943 "data_offset": 0, 00:16:19.943 "data_size": 65536 00:16:19.943 }, 00:16:19.943 { 00:16:19.943 "name": null, 00:16:19.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.943 "is_configured": false, 00:16:19.943 "data_offset": 0, 00:16:19.943 "data_size": 65536 00:16:19.943 }, 00:16:19.943 { 00:16:19.943 "name": "BaseBdev3", 00:16:19.943 "uuid": "8ec18414-4d5c-51bc-996a-bbad87f44a65", 00:16:19.943 "is_configured": true, 00:16:19.943 "data_offset": 0, 00:16:19.943 "data_size": 65536 00:16:19.943 }, 00:16:19.943 { 00:16:19.943 "name": "BaseBdev4", 00:16:19.943 "uuid": "d5de192c-4d65-553b-85bb-32d40e3d1104", 00:16:19.943 "is_configured": true, 00:16:19.943 "data_offset": 0, 00:16:19.943 "data_size": 65536 00:16:19.943 } 00:16:19.943 ] 00:16:19.943 }' 00:16:19.943 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.943 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:19.943 14:26:58 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.943 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:19.943 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:19.944 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.944 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.944 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.944 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.944 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.944 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.944 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.944 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.944 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.944 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.944 14:26:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.944 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.944 14:26:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.944 14:26:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.944 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.944 "name": "raid_bdev1", 00:16:19.944 "uuid": 
"48d19222-3dff-452e-8dee-d48483f15b98", 00:16:19.944 "strip_size_kb": 0, 00:16:19.944 "state": "online", 00:16:19.944 "raid_level": "raid1", 00:16:19.944 "superblock": false, 00:16:19.944 "num_base_bdevs": 4, 00:16:19.944 "num_base_bdevs_discovered": 3, 00:16:19.944 "num_base_bdevs_operational": 3, 00:16:19.944 "base_bdevs_list": [ 00:16:19.944 { 00:16:19.944 "name": "spare", 00:16:19.944 "uuid": "030b438e-5aff-583b-851c-123559377ddf", 00:16:19.944 "is_configured": true, 00:16:19.944 "data_offset": 0, 00:16:19.944 "data_size": 65536 00:16:19.944 }, 00:16:19.944 { 00:16:19.944 "name": null, 00:16:19.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.944 "is_configured": false, 00:16:19.944 "data_offset": 0, 00:16:19.944 "data_size": 65536 00:16:19.944 }, 00:16:19.944 { 00:16:19.944 "name": "BaseBdev3", 00:16:19.944 "uuid": "8ec18414-4d5c-51bc-996a-bbad87f44a65", 00:16:19.944 "is_configured": true, 00:16:19.944 "data_offset": 0, 00:16:19.944 "data_size": 65536 00:16:19.944 }, 00:16:19.944 { 00:16:19.944 "name": "BaseBdev4", 00:16:19.944 "uuid": "d5de192c-4d65-553b-85bb-32d40e3d1104", 00:16:19.944 "is_configured": true, 00:16:19.944 "data_offset": 0, 00:16:19.944 "data_size": 65536 00:16:19.944 } 00:16:19.944 ] 00:16:19.944 }' 00:16:19.944 14:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.944 14:26:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.466 82.88 IOPS, 248.62 MiB/s [2024-11-20T14:26:59.448Z] 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:20.466 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.466 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.466 [2024-11-20 14:26:59.277620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.466 [2024-11-20 14:26:59.277804] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.466 00:16:20.466 Latency(us) 00:16:20.466 [2024-11-20T14:26:59.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.466 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:20.466 raid_bdev1 : 8.41 80.89 242.67 0.00 0.00 17275.09 307.20 114390.11 00:16:20.466 [2024-11-20T14:26:59.449Z] =================================================================================================================== 00:16:20.467 [2024-11-20T14:26:59.449Z] Total : 80.89 242.67 0.00 0.00 17275.09 307.20 114390.11 00:16:20.467 [2024-11-20 14:26:59.338214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.467 [2024-11-20 14:26:59.338501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.467 [2024-11-20 14:26:59.338715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.467 { 00:16:20.467 "results": [ 00:16:20.467 { 00:16:20.467 "job": "raid_bdev1", 00:16:20.467 "core_mask": "0x1", 00:16:20.467 "workload": "randrw", 00:16:20.467 "percentage": 50, 00:16:20.467 "status": "finished", 00:16:20.467 "queue_depth": 2, 00:16:20.467 "io_size": 3145728, 00:16:20.467 "runtime": 8.406577, 00:16:20.467 "iops": 80.88904675470171, 00:16:20.467 "mibps": 242.66714026410511, 00:16:20.467 "io_failed": 0, 00:16:20.467 "io_timeout": 0, 00:16:20.467 "avg_latency_us": 17275.09219251337, 00:16:20.467 "min_latency_us": 307.2, 00:16:20.467 "max_latency_us": 114390.10909090909 00:16:20.467 } 00:16:20.467 ], 00:16:20.467 "core_count": 1 00:16:20.467 } 00:16:20.467 [2024-11-20 14:26:59.338906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.467 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:21.099 /dev/nbd0 00:16:21.099 14:26:59 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:21.099 1+0 records in 00:16:21.099 1+0 records out 00:16:21.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036799 s, 11.1 MB/s 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:21.099 14:26:59 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:21.099 14:26:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:21.369 /dev/nbd1 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:21.369 1+0 records in 00:16:21.369 1+0 records out 00:16:21.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637667 s, 6.4 MB/s 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 
)) 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:21.369 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:21.942 14:27:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:21.942 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:22.203 /dev/nbd1 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:22.203 14:27:00 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:22.203 1+0 records in 00:16:22.203 1+0 records out 00:16:22.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435001 s, 9.4 MB/s 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:22.203 14:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:22.203 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:22.203 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:22.203 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:22.203 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:22.203 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:22.203 14:27:01 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.203 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:22.461 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:22.461 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:22.461 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:22.461 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:22.461 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:22.461 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:22.461 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:22.461 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:22.461 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:22.461 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:22.461 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:22.461 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:22.461 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:22.461 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.461 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:23.028 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:23.028 
14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:23.028 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:23.028 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:23.028 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:23.028 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:23.029 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:23.029 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:23.029 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:23.029 14:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79095 00:16:23.029 14:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79095 ']' 00:16:23.029 14:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79095 00:16:23.029 14:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:16:23.029 14:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:23.029 14:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79095 00:16:23.029 killing process with pid 79095 00:16:23.029 Received shutdown signal, test time was about 10.874588 seconds 00:16:23.029 00:16:23.029 Latency(us) 00:16:23.029 [2024-11-20T14:27:02.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.029 [2024-11-20T14:27:02.011Z] =================================================================================================================== 00:16:23.029 [2024-11-20T14:27:02.011Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:23.029 14:27:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:23.029 14:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:23.029 14:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79095' 00:16:23.029 14:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79095 00:16:23.029 14:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79095 00:16:23.029 [2024-11-20 14:27:01.785547] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:23.287 [2024-11-20 14:27:02.157829] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:24.673 ************************************ 00:16:24.673 END TEST raid_rebuild_test_io 00:16:24.673 ************************************ 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:24.673 00:16:24.673 real 0m14.483s 00:16:24.673 user 0m19.157s 00:16:24.673 sys 0m1.817s 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.673 14:27:03 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:16:24.673 14:27:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:24.673 14:27:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.673 14:27:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:24.673 ************************************ 00:16:24.673 START TEST raid_rebuild_test_sb_io 00:16:24.673 ************************************ 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79515 00:16:24.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79515 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79515 ']' 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.673 14:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.673 [2024-11-20 14:27:03.433596] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:16:24.674 [2024-11-20 14:27:03.434005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79515 ] 00:16:24.674 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:24.674 Zero copy mechanism will not be used. 
00:16:24.674 [2024-11-20 14:27:03.618721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.985 [2024-11-20 14:27:03.758918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.985 [2024-11-20 14:27:03.960042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.985 [2024-11-20 14:27:03.960126] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.552 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.552 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:16:25.552 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.552 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:25.552 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.552 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.552 BaseBdev1_malloc 00:16:25.552 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.552 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:25.552 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.552 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.552 [2024-11-20 14:27:04.510763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:25.552 [2024-11-20 14:27:04.510833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.552 [2024-11-20 14:27:04.510863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:16:25.552 [2024-11-20 14:27:04.510882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.552 [2024-11-20 14:27:04.513630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.552 [2024-11-20 14:27:04.513676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:25.552 BaseBdev1 00:16:25.552 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.552 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.552 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:25.552 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.552 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.810 BaseBdev2_malloc 00:16:25.810 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.810 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:25.810 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.810 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.810 [2024-11-20 14:27:04.562858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:25.810 [2024-11-20 14:27:04.562932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.810 [2024-11-20 14:27:04.562963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:25.811 [2024-11-20 14:27:04.562981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.811 [2024-11-20 14:27:04.565808] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.811 [2024-11-20 14:27:04.565853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:25.811 BaseBdev2 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.811 BaseBdev3_malloc 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.811 [2024-11-20 14:27:04.626041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:25.811 [2024-11-20 14:27:04.626107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.811 [2024-11-20 14:27:04.626139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:25.811 [2024-11-20 14:27:04.626158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.811 [2024-11-20 14:27:04.628964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.811 [2024-11-20 14:27:04.629022] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev3 00:16:25.811 BaseBdev3 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.811 BaseBdev4_malloc 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.811 [2024-11-20 14:27:04.677899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:25.811 [2024-11-20 14:27:04.677969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.811 [2024-11-20 14:27:04.678014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:25.811 [2024-11-20 14:27:04.678035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.811 [2024-11-20 14:27:04.680752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.811 [2024-11-20 14:27:04.680802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:25.811 BaseBdev4 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.811 spare_malloc 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.811 spare_delay 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.811 [2024-11-20 14:27:04.738071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:25.811 [2024-11-20 14:27:04.738139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.811 [2024-11-20 14:27:04.738165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:25.811 [2024-11-20 14:27:04.738183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.811 [2024-11-20 14:27:04.740911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.811 [2024-11-20 14:27:04.740956] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:25.811 spare 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.811 [2024-11-20 14:27:04.746126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.811 [2024-11-20 14:27:04.748549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.811 [2024-11-20 14:27:04.748642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.811 [2024-11-20 14:27:04.748727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:25.811 [2024-11-20 14:27:04.748966] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:25.811 [2024-11-20 14:27:04.749022] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:25.811 [2024-11-20 14:27:04.749346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:25.811 [2024-11-20 14:27:04.749578] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:25.811 [2024-11-20 14:27:04.749594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:25.811 [2024-11-20 14:27:04.749778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.811 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.070 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.070 "name": "raid_bdev1", 00:16:26.070 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:26.070 "strip_size_kb": 0, 00:16:26.070 "state": "online", 00:16:26.070 "raid_level": "raid1", 
00:16:26.070 "superblock": true, 00:16:26.070 "num_base_bdevs": 4, 00:16:26.070 "num_base_bdevs_discovered": 4, 00:16:26.070 "num_base_bdevs_operational": 4, 00:16:26.070 "base_bdevs_list": [ 00:16:26.070 { 00:16:26.070 "name": "BaseBdev1", 00:16:26.070 "uuid": "1d78199a-710f-597c-aab9-d76856f8c392", 00:16:26.070 "is_configured": true, 00:16:26.070 "data_offset": 2048, 00:16:26.070 "data_size": 63488 00:16:26.070 }, 00:16:26.070 { 00:16:26.070 "name": "BaseBdev2", 00:16:26.070 "uuid": "335b8a24-61b4-5340-be82-a5b59a9b18d4", 00:16:26.070 "is_configured": true, 00:16:26.070 "data_offset": 2048, 00:16:26.070 "data_size": 63488 00:16:26.070 }, 00:16:26.070 { 00:16:26.070 "name": "BaseBdev3", 00:16:26.070 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:26.070 "is_configured": true, 00:16:26.070 "data_offset": 2048, 00:16:26.070 "data_size": 63488 00:16:26.070 }, 00:16:26.070 { 00:16:26.070 "name": "BaseBdev4", 00:16:26.070 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:26.070 "is_configured": true, 00:16:26.070 "data_offset": 2048, 00:16:26.070 "data_size": 63488 00:16:26.070 } 00:16:26.070 ] 00:16:26.070 }' 00:16:26.070 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.070 14:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.328 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:26.328 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:26.328 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.328 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.587 [2024-11-20 14:27:05.310711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.587 [2024-11-20 14:27:05.414285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.587 14:27:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.587 "name": "raid_bdev1", 00:16:26.587 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:26.587 "strip_size_kb": 0, 00:16:26.587 "state": "online", 00:16:26.587 "raid_level": "raid1", 00:16:26.587 "superblock": true, 00:16:26.587 "num_base_bdevs": 4, 00:16:26.587 "num_base_bdevs_discovered": 3, 00:16:26.587 "num_base_bdevs_operational": 3, 00:16:26.587 "base_bdevs_list": [ 00:16:26.587 { 00:16:26.587 "name": null, 00:16:26.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.587 "is_configured": false, 00:16:26.587 "data_offset": 0, 00:16:26.587 "data_size": 
63488 00:16:26.587 }, 00:16:26.587 { 00:16:26.587 "name": "BaseBdev2", 00:16:26.587 "uuid": "335b8a24-61b4-5340-be82-a5b59a9b18d4", 00:16:26.587 "is_configured": true, 00:16:26.587 "data_offset": 2048, 00:16:26.587 "data_size": 63488 00:16:26.587 }, 00:16:26.587 { 00:16:26.587 "name": "BaseBdev3", 00:16:26.587 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:26.587 "is_configured": true, 00:16:26.587 "data_offset": 2048, 00:16:26.587 "data_size": 63488 00:16:26.587 }, 00:16:26.587 { 00:16:26.587 "name": "BaseBdev4", 00:16:26.587 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:26.587 "is_configured": true, 00:16:26.587 "data_offset": 2048, 00:16:26.587 "data_size": 63488 00:16:26.587 } 00:16:26.587 ] 00:16:26.587 }' 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.587 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.587 [2024-11-20 14:27:05.546526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:26.587 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:26.587 Zero copy mechanism will not be used. 00:16:26.587 Running I/O for 60 seconds... 
00:16:27.155 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:27.155 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.155 14:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.155 [2024-11-20 14:27:05.965302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.155 14:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.155 14:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:27.155 [2024-11-20 14:27:06.076392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:27.155 [2024-11-20 14:27:06.079054] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:27.413 [2024-11-20 14:27:06.214414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:27.672 [2024-11-20 14:27:06.480942] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:27.672 [2024-11-20 14:27:06.481814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:27.932 150.00 IOPS, 450.00 MiB/s [2024-11-20T14:27:06.914Z] [2024-11-20 14:27:06.852481] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:28.192 [2024-11-20 14:27:06.983129] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:28.192 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.192 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 
-- # local raid_bdev_name=raid_bdev1 00:16:28.192 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.192 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.192 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.192 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.192 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.192 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.192 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.192 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.192 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.192 "name": "raid_bdev1", 00:16:28.192 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:28.192 "strip_size_kb": 0, 00:16:28.192 "state": "online", 00:16:28.192 "raid_level": "raid1", 00:16:28.192 "superblock": true, 00:16:28.192 "num_base_bdevs": 4, 00:16:28.192 "num_base_bdevs_discovered": 4, 00:16:28.192 "num_base_bdevs_operational": 4, 00:16:28.192 "process": { 00:16:28.192 "type": "rebuild", 00:16:28.192 "target": "spare", 00:16:28.192 "progress": { 00:16:28.192 "blocks": 10240, 00:16:28.192 "percent": 16 00:16:28.192 } 00:16:28.192 }, 00:16:28.192 "base_bdevs_list": [ 00:16:28.192 { 00:16:28.192 "name": "spare", 00:16:28.192 "uuid": "26621ce3-f030-54e4-8aaf-a738f7850d70", 00:16:28.192 "is_configured": true, 00:16:28.192 "data_offset": 2048, 00:16:28.192 "data_size": 63488 00:16:28.192 }, 00:16:28.192 { 00:16:28.192 "name": "BaseBdev2", 00:16:28.192 "uuid": "335b8a24-61b4-5340-be82-a5b59a9b18d4", 00:16:28.192 "is_configured": true, 
00:16:28.192 "data_offset": 2048, 00:16:28.192 "data_size": 63488 00:16:28.192 }, 00:16:28.192 { 00:16:28.192 "name": "BaseBdev3", 00:16:28.192 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:28.192 "is_configured": true, 00:16:28.192 "data_offset": 2048, 00:16:28.192 "data_size": 63488 00:16:28.192 }, 00:16:28.192 { 00:16:28.192 "name": "BaseBdev4", 00:16:28.192 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:28.192 "is_configured": true, 00:16:28.192 "data_offset": 2048, 00:16:28.192 "data_size": 63488 00:16:28.192 } 00:16:28.192 ] 00:16:28.192 }' 00:16:28.192 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.192 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.192 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.452 [2024-11-20 14:27:07.188153] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:28.452 [2024-11-20 14:27:07.339817] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:28.452 [2024-11-20 14:27:07.365450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.452 [2024-11-20 14:27:07.365554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:28.452 [2024-11-20 14:27:07.365573] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such 
device 00:16:28.452 [2024-11-20 14:27:07.400560] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.452 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.711 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.711 14:27:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.711 "name": "raid_bdev1", 00:16:28.711 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:28.711 "strip_size_kb": 0, 00:16:28.711 "state": "online", 00:16:28.711 "raid_level": "raid1", 00:16:28.711 "superblock": true, 00:16:28.711 "num_base_bdevs": 4, 00:16:28.711 "num_base_bdevs_discovered": 3, 00:16:28.711 "num_base_bdevs_operational": 3, 00:16:28.711 "base_bdevs_list": [ 00:16:28.711 { 00:16:28.711 "name": null, 00:16:28.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.711 "is_configured": false, 00:16:28.711 "data_offset": 0, 00:16:28.711 "data_size": 63488 00:16:28.711 }, 00:16:28.711 { 00:16:28.711 "name": "BaseBdev2", 00:16:28.711 "uuid": "335b8a24-61b4-5340-be82-a5b59a9b18d4", 00:16:28.711 "is_configured": true, 00:16:28.711 "data_offset": 2048, 00:16:28.711 "data_size": 63488 00:16:28.711 }, 00:16:28.711 { 00:16:28.711 "name": "BaseBdev3", 00:16:28.711 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:28.711 "is_configured": true, 00:16:28.711 "data_offset": 2048, 00:16:28.711 "data_size": 63488 00:16:28.711 }, 00:16:28.711 { 00:16:28.711 "name": "BaseBdev4", 00:16:28.711 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:28.711 "is_configured": true, 00:16:28.711 "data_offset": 2048, 00:16:28.711 "data_size": 63488 00:16:28.711 } 00:16:28.711 ] 00:16:28.711 }' 00:16:28.711 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.711 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.279 131.00 IOPS, 393.00 MiB/s [2024-11-20T14:27:08.261Z] 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.279 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.279 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:16:29.279 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.279 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.279 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.279 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.279 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.279 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.279 14:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.279 14:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.279 "name": "raid_bdev1", 00:16:29.279 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:29.279 "strip_size_kb": 0, 00:16:29.279 "state": "online", 00:16:29.279 "raid_level": "raid1", 00:16:29.279 "superblock": true, 00:16:29.279 "num_base_bdevs": 4, 00:16:29.279 "num_base_bdevs_discovered": 3, 00:16:29.279 "num_base_bdevs_operational": 3, 00:16:29.279 "base_bdevs_list": [ 00:16:29.279 { 00:16:29.279 "name": null, 00:16:29.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.279 "is_configured": false, 00:16:29.279 "data_offset": 0, 00:16:29.279 "data_size": 63488 00:16:29.279 }, 00:16:29.279 { 00:16:29.279 "name": "BaseBdev2", 00:16:29.279 "uuid": "335b8a24-61b4-5340-be82-a5b59a9b18d4", 00:16:29.279 "is_configured": true, 00:16:29.279 "data_offset": 2048, 00:16:29.279 "data_size": 63488 00:16:29.279 }, 00:16:29.279 { 00:16:29.279 "name": "BaseBdev3", 00:16:29.279 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:29.279 "is_configured": true, 00:16:29.279 "data_offset": 2048, 00:16:29.279 "data_size": 63488 00:16:29.279 }, 00:16:29.279 { 00:16:29.279 "name": 
"BaseBdev4", 00:16:29.279 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:29.279 "is_configured": true, 00:16:29.279 "data_offset": 2048, 00:16:29.279 "data_size": 63488 00:16:29.279 } 00:16:29.279 ] 00:16:29.279 }' 00:16:29.279 14:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.279 14:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.279 14:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.279 14:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.279 14:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:29.279 14:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.279 14:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.279 [2024-11-20 14:27:08.133579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:29.279 14:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.279 14:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:29.279 [2024-11-20 14:27:08.219877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:29.279 [2024-11-20 14:27:08.222603] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:29.538 [2024-11-20 14:27:08.334938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:29.538 [2024-11-20 14:27:08.336693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:29.796 128.00 IOPS, 384.00 MiB/s [2024-11-20T14:27:08.778Z] 
[2024-11-20 14:27:08.556285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:29.796 [2024-11-20 14:27:08.557295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:30.055 [2024-11-20 14:27:08.924174] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:30.314 [2024-11-20 14:27:09.156912] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:30.314 [2024-11-20 14:27:09.158089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:30.314 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.314 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.314 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.314 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.314 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.314 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.314 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.314 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.314 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.314 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.314 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.314 "name": "raid_bdev1", 00:16:30.314 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:30.314 "strip_size_kb": 0, 00:16:30.314 "state": "online", 00:16:30.314 "raid_level": "raid1", 00:16:30.314 "superblock": true, 00:16:30.314 "num_base_bdevs": 4, 00:16:30.314 "num_base_bdevs_discovered": 4, 00:16:30.314 "num_base_bdevs_operational": 4, 00:16:30.314 "process": { 00:16:30.314 "type": "rebuild", 00:16:30.314 "target": "spare", 00:16:30.314 "progress": { 00:16:30.314 "blocks": 10240, 00:16:30.314 "percent": 16 00:16:30.314 } 00:16:30.314 }, 00:16:30.314 "base_bdevs_list": [ 00:16:30.314 { 00:16:30.314 "name": "spare", 00:16:30.314 "uuid": "26621ce3-f030-54e4-8aaf-a738f7850d70", 00:16:30.314 "is_configured": true, 00:16:30.314 "data_offset": 2048, 00:16:30.314 "data_size": 63488 00:16:30.314 }, 00:16:30.314 { 00:16:30.314 "name": "BaseBdev2", 00:16:30.314 "uuid": "335b8a24-61b4-5340-be82-a5b59a9b18d4", 00:16:30.314 "is_configured": true, 00:16:30.314 "data_offset": 2048, 00:16:30.314 "data_size": 63488 00:16:30.314 }, 00:16:30.314 { 00:16:30.314 "name": "BaseBdev3", 00:16:30.314 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:30.314 "is_configured": true, 00:16:30.314 "data_offset": 2048, 00:16:30.314 "data_size": 63488 00:16:30.314 }, 00:16:30.314 { 00:16:30.314 "name": "BaseBdev4", 00:16:30.314 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:30.314 "is_configured": true, 00:16:30.314 "data_offset": 2048, 00:16:30.314 "data_size": 63488 00:16:30.314 } 00:16:30.314 ] 00:16:30.314 }' 00:16:30.314 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:30.572 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.572 [2024-11-20 14:27:09.366109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:30.572 [2024-11-20 14:27:09.510145] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:30.572 [2024-11-20 14:27:09.510453] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
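[Editorial note] The chunk above captures a real script error: `'[' = false ']'` followed by `bdev_raid.sh: line 666: [: =: unary operator expected`. A minimal generic reproduction of that class of failure (illustrative variable name, not SPDK's actual code): when an unquoted variable expands to nothing, `[` receives only `= false` and cannot parse `=` as a unary operator.

```shell
#!/bin/sh
flag=""

# Broken form: `[ $flag = false ]` word-splits to `[ = false ]`, a test(1)
# syntax error, which exits with status 2.
[ $flag = false ] 2>/dev/null
echo "unquoted exit status: $?"    # -> 2

# Quoting preserves the empty operand, giving a valid three-argument
# string comparison that is simply false (exit status 1), with no error.
[ "$flag" = false ]
echo "quoted exit status: $?"      # -> 1
```

Quoting the expansion (or using bash's `[[ ]]`, which does not word-split) avoids the error while keeping the comparison's intended semantics.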
00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.572 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.573 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.573 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.829 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.829 111.25 IOPS, 333.75 MiB/s [2024-11-20T14:27:09.811Z] 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.829 "name": "raid_bdev1", 00:16:30.829 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:30.829 "strip_size_kb": 0, 00:16:30.829 "state": "online", 00:16:30.829 "raid_level": "raid1", 00:16:30.829 "superblock": true, 00:16:30.829 "num_base_bdevs": 4, 00:16:30.829 "num_base_bdevs_discovered": 3, 00:16:30.829 "num_base_bdevs_operational": 3, 00:16:30.829 "process": { 00:16:30.829 "type": "rebuild", 00:16:30.829 "target": "spare", 00:16:30.829 "progress": { 00:16:30.829 "blocks": 12288, 00:16:30.829 "percent": 19 00:16:30.829 } 00:16:30.829 }, 00:16:30.829 "base_bdevs_list": [ 00:16:30.829 { 00:16:30.829 "name": "spare", 00:16:30.829 "uuid": "26621ce3-f030-54e4-8aaf-a738f7850d70", 00:16:30.829 "is_configured": true, 00:16:30.829 "data_offset": 2048, 00:16:30.829 "data_size": 63488 00:16:30.829 }, 00:16:30.829 { 00:16:30.829 "name": null, 00:16:30.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.829 "is_configured": 
false, 00:16:30.829 "data_offset": 0, 00:16:30.829 "data_size": 63488 00:16:30.829 }, 00:16:30.829 { 00:16:30.829 "name": "BaseBdev3", 00:16:30.829 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:30.829 "is_configured": true, 00:16:30.829 "data_offset": 2048, 00:16:30.829 "data_size": 63488 00:16:30.829 }, 00:16:30.829 { 00:16:30.829 "name": "BaseBdev4", 00:16:30.829 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:30.829 "is_configured": true, 00:16:30.829 "data_offset": 2048, 00:16:30.829 "data_size": 63488 00:16:30.829 } 00:16:30.829 ] 00:16:30.829 }' 00:16:30.829 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.829 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.829 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.829 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.829 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=536 00:16:30.829 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.829 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.829 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.829 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.829 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.829 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.829 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.829 14:27:09 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.829 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.829 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.829 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.829 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.829 "name": "raid_bdev1", 00:16:30.829 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:30.829 "strip_size_kb": 0, 00:16:30.829 "state": "online", 00:16:30.829 "raid_level": "raid1", 00:16:30.829 "superblock": true, 00:16:30.829 "num_base_bdevs": 4, 00:16:30.829 "num_base_bdevs_discovered": 3, 00:16:30.829 "num_base_bdevs_operational": 3, 00:16:30.829 "process": { 00:16:30.829 "type": "rebuild", 00:16:30.829 "target": "spare", 00:16:30.829 "progress": { 00:16:30.829 "blocks": 14336, 00:16:30.829 "percent": 22 00:16:30.829 } 00:16:30.829 }, 00:16:30.829 "base_bdevs_list": [ 00:16:30.829 { 00:16:30.829 "name": "spare", 00:16:30.829 "uuid": "26621ce3-f030-54e4-8aaf-a738f7850d70", 00:16:30.829 "is_configured": true, 00:16:30.829 "data_offset": 2048, 00:16:30.829 "data_size": 63488 00:16:30.829 }, 00:16:30.829 { 00:16:30.829 "name": null, 00:16:30.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.829 "is_configured": false, 00:16:30.829 "data_offset": 0, 00:16:30.829 "data_size": 63488 00:16:30.829 }, 00:16:30.829 { 00:16:30.830 "name": "BaseBdev3", 00:16:30.830 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:30.830 "is_configured": true, 00:16:30.830 "data_offset": 2048, 00:16:30.830 "data_size": 63488 00:16:30.830 }, 00:16:30.830 { 00:16:30.830 "name": "BaseBdev4", 00:16:30.830 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:30.830 "is_configured": true, 00:16:30.830 "data_offset": 2048, 00:16:30.830 "data_size": 
63488 00:16:30.830 } 00:16:30.830 ] 00:16:30.830 }' 00:16:30.830 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.830 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.830 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.830 [2024-11-20 14:27:09.783519] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:30.830 [2024-11-20 14:27:09.784261] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:31.087 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.087 14:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:31.345 [2024-11-20 14:27:10.157657] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:31.604 [2024-11-20 14:27:10.376618] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:31.863 100.60 IOPS, 301.80 MiB/s [2024-11-20T14:27:10.845Z] [2024-11-20 14:27:10.616936] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:31.863 [2024-11-20 14:27:10.738813] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:31.863 14:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.863 14:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.863 14:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:31.863 14:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.863 14:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.863 14:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.863 14:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.863 14:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.863 14:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.863 14:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.121 14:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.121 14:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.121 "name": "raid_bdev1", 00:16:32.121 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:32.122 "strip_size_kb": 0, 00:16:32.122 "state": "online", 00:16:32.122 "raid_level": "raid1", 00:16:32.122 "superblock": true, 00:16:32.122 "num_base_bdevs": 4, 00:16:32.122 "num_base_bdevs_discovered": 3, 00:16:32.122 "num_base_bdevs_operational": 3, 00:16:32.122 "process": { 00:16:32.122 "type": "rebuild", 00:16:32.122 "target": "spare", 00:16:32.122 "progress": { 00:16:32.122 "blocks": 28672, 00:16:32.122 "percent": 45 00:16:32.122 } 00:16:32.122 }, 00:16:32.122 "base_bdevs_list": [ 00:16:32.122 { 00:16:32.122 "name": "spare", 00:16:32.122 "uuid": "26621ce3-f030-54e4-8aaf-a738f7850d70", 00:16:32.122 "is_configured": true, 00:16:32.122 "data_offset": 2048, 00:16:32.122 "data_size": 63488 00:16:32.122 }, 00:16:32.122 { 00:16:32.122 "name": null, 00:16:32.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.122 "is_configured": false, 00:16:32.122 
"data_offset": 0, 00:16:32.122 "data_size": 63488 00:16:32.122 }, 00:16:32.122 { 00:16:32.122 "name": "BaseBdev3", 00:16:32.122 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:32.122 "is_configured": true, 00:16:32.122 "data_offset": 2048, 00:16:32.122 "data_size": 63488 00:16:32.122 }, 00:16:32.122 { 00:16:32.122 "name": "BaseBdev4", 00:16:32.122 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:32.122 "is_configured": true, 00:16:32.122 "data_offset": 2048, 00:16:32.122 "data_size": 63488 00:16:32.122 } 00:16:32.122 ] 00:16:32.122 }' 00:16:32.122 14:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.122 14:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.122 14:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.122 14:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.122 14:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:32.381 [2024-11-20 14:27:11.109339] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:32.381 [2024-11-20 14:27:11.233170] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:32.639 93.00 IOPS, 279.00 MiB/s [2024-11-20T14:27:11.621Z] [2024-11-20 14:27:11.584830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:32.897 [2024-11-20 14:27:11.817581] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:33.156 [2024-11-20 14:27:11.938310] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:33.156 
14:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.156 14:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.156 14:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.156 14:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.156 14:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.156 14:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.156 14:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.156 14:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.156 14:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.156 14:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.156 14:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.156 14:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.156 "name": "raid_bdev1", 00:16:33.156 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:33.156 "strip_size_kb": 0, 00:16:33.156 "state": "online", 00:16:33.156 "raid_level": "raid1", 00:16:33.156 "superblock": true, 00:16:33.156 "num_base_bdevs": 4, 00:16:33.156 "num_base_bdevs_discovered": 3, 00:16:33.156 "num_base_bdevs_operational": 3, 00:16:33.156 "process": { 00:16:33.156 "type": "rebuild", 00:16:33.156 "target": "spare", 00:16:33.156 "progress": { 00:16:33.156 "blocks": 47104, 00:16:33.156 "percent": 74 00:16:33.156 } 00:16:33.156 }, 00:16:33.156 "base_bdevs_list": [ 00:16:33.156 { 00:16:33.156 "name": "spare", 00:16:33.156 
"uuid": "26621ce3-f030-54e4-8aaf-a738f7850d70", 00:16:33.156 "is_configured": true, 00:16:33.156 "data_offset": 2048, 00:16:33.156 "data_size": 63488 00:16:33.156 }, 00:16:33.156 { 00:16:33.156 "name": null, 00:16:33.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.156 "is_configured": false, 00:16:33.156 "data_offset": 0, 00:16:33.156 "data_size": 63488 00:16:33.156 }, 00:16:33.156 { 00:16:33.156 "name": "BaseBdev3", 00:16:33.156 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:33.156 "is_configured": true, 00:16:33.156 "data_offset": 2048, 00:16:33.156 "data_size": 63488 00:16:33.156 }, 00:16:33.156 { 00:16:33.156 "name": "BaseBdev4", 00:16:33.156 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:33.156 "is_configured": true, 00:16:33.156 "data_offset": 2048, 00:16:33.156 "data_size": 63488 00:16:33.156 } 00:16:33.156 ] 00:16:33.156 }' 00:16:33.156 14:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.156 14:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.156 14:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.415 14:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.415 14:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.415 [2024-11-20 14:27:12.254605] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:33.415 [2024-11-20 14:27:12.363806] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:34.032 84.57 IOPS, 253.71 MiB/s [2024-11-20T14:27:13.014Z] [2024-11-20 14:27:12.817005] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:34.292 
[2024-11-20 14:27:13.167449] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:34.292 14:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:34.292 14:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.292 14:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.292 14:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.292 14:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.292 14:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.292 14:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.292 14:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.292 14:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.292 14:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.292 14:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.292 14:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.292 "name": "raid_bdev1", 00:16:34.292 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:34.292 "strip_size_kb": 0, 00:16:34.292 "state": "online", 00:16:34.292 "raid_level": "raid1", 00:16:34.293 "superblock": true, 00:16:34.293 "num_base_bdevs": 4, 00:16:34.293 "num_base_bdevs_discovered": 3, 00:16:34.293 "num_base_bdevs_operational": 3, 00:16:34.293 "process": { 00:16:34.293 "type": "rebuild", 00:16:34.293 "target": "spare", 00:16:34.293 "progress": { 00:16:34.293 "blocks": 63488, 00:16:34.293 "percent": 
100 00:16:34.293 } 00:16:34.293 }, 00:16:34.293 "base_bdevs_list": [ 00:16:34.293 { 00:16:34.293 "name": "spare", 00:16:34.293 "uuid": "26621ce3-f030-54e4-8aaf-a738f7850d70", 00:16:34.293 "is_configured": true, 00:16:34.293 "data_offset": 2048, 00:16:34.293 "data_size": 63488 00:16:34.293 }, 00:16:34.293 { 00:16:34.293 "name": null, 00:16:34.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.293 "is_configured": false, 00:16:34.293 "data_offset": 0, 00:16:34.293 "data_size": 63488 00:16:34.293 }, 00:16:34.293 { 00:16:34.293 "name": "BaseBdev3", 00:16:34.293 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:34.293 "is_configured": true, 00:16:34.293 "data_offset": 2048, 00:16:34.293 "data_size": 63488 00:16:34.293 }, 00:16:34.293 { 00:16:34.293 "name": "BaseBdev4", 00:16:34.293 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:34.293 "is_configured": true, 00:16:34.293 "data_offset": 2048, 00:16:34.293 "data_size": 63488 00:16:34.293 } 00:16:34.293 ] 00:16:34.293 }' 00:16:34.293 14:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.293 [2024-11-20 14:27:13.267434] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:34.552 [2024-11-20 14:27:13.281322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.552 14:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.552 14:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.552 14:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.552 14:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:35.379 77.38 IOPS, 232.12 MiB/s [2024-11-20T14:27:14.361Z] 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.379 14:27:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.379 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.379 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.379 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.379 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.379 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.379 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.379 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.379 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.379 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.638 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.638 "name": "raid_bdev1", 00:16:35.638 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:35.638 "strip_size_kb": 0, 00:16:35.638 "state": "online", 00:16:35.638 "raid_level": "raid1", 00:16:35.638 "superblock": true, 00:16:35.638 "num_base_bdevs": 4, 00:16:35.638 "num_base_bdevs_discovered": 3, 00:16:35.638 "num_base_bdevs_operational": 3, 00:16:35.638 "base_bdevs_list": [ 00:16:35.638 { 00:16:35.638 "name": "spare", 00:16:35.638 "uuid": "26621ce3-f030-54e4-8aaf-a738f7850d70", 00:16:35.638 "is_configured": true, 00:16:35.638 "data_offset": 2048, 00:16:35.638 "data_size": 63488 00:16:35.638 }, 00:16:35.638 { 00:16:35.638 "name": null, 00:16:35.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.638 "is_configured": false, 00:16:35.638 
"data_offset": 0, 00:16:35.638 "data_size": 63488 00:16:35.638 }, 00:16:35.638 { 00:16:35.638 "name": "BaseBdev3", 00:16:35.638 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:35.638 "is_configured": true, 00:16:35.638 "data_offset": 2048, 00:16:35.638 "data_size": 63488 00:16:35.638 }, 00:16:35.638 { 00:16:35.638 "name": "BaseBdev4", 00:16:35.638 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:35.638 "is_configured": true, 00:16:35.638 "data_offset": 2048, 00:16:35.638 "data_size": 63488 00:16:35.638 } 00:16:35.638 ] 00:16:35.638 }' 00:16:35.638 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.638 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:35.638 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.638 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:35.638 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:35.638 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:35.638 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.638 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:35.638 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:35.638 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.638 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.639 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.639 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # 
set +x 00:16:35.639 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.639 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.639 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.639 "name": "raid_bdev1", 00:16:35.639 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:35.639 "strip_size_kb": 0, 00:16:35.639 "state": "online", 00:16:35.639 "raid_level": "raid1", 00:16:35.639 "superblock": true, 00:16:35.639 "num_base_bdevs": 4, 00:16:35.639 "num_base_bdevs_discovered": 3, 00:16:35.639 "num_base_bdevs_operational": 3, 00:16:35.639 "base_bdevs_list": [ 00:16:35.639 { 00:16:35.639 "name": "spare", 00:16:35.639 "uuid": "26621ce3-f030-54e4-8aaf-a738f7850d70", 00:16:35.639 "is_configured": true, 00:16:35.639 "data_offset": 2048, 00:16:35.639 "data_size": 63488 00:16:35.639 }, 00:16:35.639 { 00:16:35.639 "name": null, 00:16:35.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.639 "is_configured": false, 00:16:35.639 "data_offset": 0, 00:16:35.639 "data_size": 63488 00:16:35.639 }, 00:16:35.639 { 00:16:35.639 "name": "BaseBdev3", 00:16:35.639 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:35.639 "is_configured": true, 00:16:35.639 "data_offset": 2048, 00:16:35.639 "data_size": 63488 00:16:35.639 }, 00:16:35.639 { 00:16:35.639 "name": "BaseBdev4", 00:16:35.639 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:35.639 "is_configured": true, 00:16:35.639 "data_offset": 2048, 00:16:35.639 "data_size": 63488 00:16:35.639 } 00:16:35.639 ] 00:16:35.639 }' 00:16:35.639 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.639 72.11 IOPS, 216.33 MiB/s [2024-11-20T14:27:14.621Z] 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.639 14:27:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.898 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:35.899 "name": "raid_bdev1", 00:16:35.899 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:35.899 "strip_size_kb": 0, 00:16:35.899 "state": "online", 00:16:35.899 "raid_level": "raid1", 00:16:35.899 "superblock": true, 00:16:35.899 "num_base_bdevs": 4, 00:16:35.899 "num_base_bdevs_discovered": 3, 00:16:35.899 "num_base_bdevs_operational": 3, 00:16:35.899 "base_bdevs_list": [ 00:16:35.899 { 00:16:35.899 "name": "spare", 00:16:35.899 "uuid": "26621ce3-f030-54e4-8aaf-a738f7850d70", 00:16:35.899 "is_configured": true, 00:16:35.899 "data_offset": 2048, 00:16:35.899 "data_size": 63488 00:16:35.899 }, 00:16:35.899 { 00:16:35.899 "name": null, 00:16:35.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.899 "is_configured": false, 00:16:35.899 "data_offset": 0, 00:16:35.899 "data_size": 63488 00:16:35.899 }, 00:16:35.899 { 00:16:35.899 "name": "BaseBdev3", 00:16:35.899 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:35.899 "is_configured": true, 00:16:35.899 "data_offset": 2048, 00:16:35.899 "data_size": 63488 00:16:35.899 }, 00:16:35.899 { 00:16:35.899 "name": "BaseBdev4", 00:16:35.899 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:35.899 "is_configured": true, 00:16:35.899 "data_offset": 2048, 00:16:35.899 "data_size": 63488 00:16:35.899 } 00:16:35.899 ] 00:16:35.899 }' 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.899 14:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.467 [2024-11-20 14:27:15.143616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:36.467 
[2024-11-20 14:27:15.143785] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.467 00:16:36.467 Latency(us) 00:16:36.467 [2024-11-20T14:27:15.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.467 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:36.467 raid_bdev1 : 9.62 69.63 208.89 0.00 0.00 19047.42 294.17 123922.62 00:16:36.467 [2024-11-20T14:27:15.449Z] =================================================================================================================== 00:16:36.467 [2024-11-20T14:27:15.449Z] Total : 69.63 208.89 0.00 0.00 19047.42 294.17 123922.62 00:16:36.467 { 00:16:36.467 "results": [ 00:16:36.467 { 00:16:36.467 "job": "raid_bdev1", 00:16:36.467 "core_mask": "0x1", 00:16:36.467 "workload": "randrw", 00:16:36.467 "percentage": 50, 00:16:36.467 "status": "finished", 00:16:36.467 "queue_depth": 2, 00:16:36.467 "io_size": 3145728, 00:16:36.467 "runtime": 9.622239, 00:16:36.467 "iops": 69.63036357754157, 00:16:36.467 "mibps": 208.89109073262472, 00:16:36.467 "io_failed": 0, 00:16:36.467 "io_timeout": 0, 00:16:36.467 "avg_latency_us": 19047.417052917233, 00:16:36.467 "min_latency_us": 294.16727272727275, 00:16:36.467 "max_latency_us": 123922.61818181818 00:16:36.467 } 00:16:36.467 ], 00:16:36.467 "core_count": 1 00:16:36.467 } 00:16:36.467 [2024-11-20 14:27:15.191254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.467 [2024-11-20 14:27:15.191476] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.467 [2024-11-20 14:27:15.191657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.467 [2024-11-20 14:27:15.191859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:36.467 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:36.726 /dev/nbd0 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:36.726 1+0 records in 00:16:36.726 1+0 records out 00:16:36.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385863 s, 10.6 MB/s 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:36.726 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 
00:16:36.985 /dev/nbd1 00:16:36.985 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:36.985 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:36.985 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:36.985 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:36.985 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:36.985 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:36.985 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:36.985 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:36.985 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:36.986 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:36.986 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:36.986 1+0 records in 00:16:36.986 1+0 records out 00:16:36.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445527 s, 9.2 MB/s 00:16:36.986 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.986 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:36.986 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.986 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:36.986 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@893 -- # return 0 00:16:36.986 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:36.986 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:36.986 14:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:37.244 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:37.244 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:37.244 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:37.244 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:37.244 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:37.244 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.244 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:37.503 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:37.762 /dev/nbd1 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:37.762 14:27:16 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:37.762 1+0 records in 00:16:37.762 1+0 records out 00:16:37.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660633 s, 6.2 MB/s 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:37.762 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:38.022 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:38.022 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:16:38.022 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:38.022 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:38.022 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:38.022 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:38.022 14:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:38.305 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:38.305 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:38.305 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:38.305 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:38.305 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:38.305 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:38.305 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:38.305 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:38.305 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:38.305 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:38.305 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:38.305 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:38.305 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # 
local i 00:16:38.305 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:38.305 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:16:38.565 [2024-11-20 14:27:17.509121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:38.565 [2024-11-20 14:27:17.509194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.565 [2024-11-20 14:27:17.509225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:38.565 [2024-11-20 14:27:17.509243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.565 [2024-11-20 14:27:17.512198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.565 [2024-11-20 14:27:17.512241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:38.565 [2024-11-20 14:27:17.512350] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:38.565 [2024-11-20 14:27:17.512421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:38.565 [2024-11-20 14:27:17.512597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:38.565 [2024-11-20 14:27:17.512737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:38.565 spare 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.565 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.824 [2024-11-20 14:27:17.612867] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:38.824 [2024-11-20 14:27:17.612936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:38.824 [2024-11-20 14:27:17.613410] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:16:38.824 [2024-11-20 14:27:17.613670] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:38.824 [2024-11-20 14:27:17.613687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:38.824 [2024-11-20 14:27:17.613944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.824 "name": "raid_bdev1", 00:16:38.824 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:38.824 "strip_size_kb": 0, 00:16:38.824 "state": "online", 00:16:38.824 "raid_level": "raid1", 00:16:38.824 "superblock": true, 00:16:38.824 "num_base_bdevs": 4, 00:16:38.824 "num_base_bdevs_discovered": 3, 00:16:38.824 "num_base_bdevs_operational": 3, 00:16:38.824 "base_bdevs_list": [ 00:16:38.824 { 00:16:38.824 "name": "spare", 00:16:38.824 "uuid": "26621ce3-f030-54e4-8aaf-a738f7850d70", 00:16:38.824 "is_configured": true, 00:16:38.824 "data_offset": 2048, 00:16:38.824 "data_size": 63488 00:16:38.824 }, 00:16:38.824 { 00:16:38.824 "name": null, 00:16:38.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.824 "is_configured": false, 00:16:38.824 "data_offset": 2048, 00:16:38.824 "data_size": 63488 00:16:38.824 }, 00:16:38.824 { 00:16:38.824 "name": "BaseBdev3", 00:16:38.824 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:38.824 "is_configured": true, 00:16:38.824 "data_offset": 2048, 00:16:38.824 "data_size": 63488 00:16:38.824 }, 00:16:38.824 { 00:16:38.824 "name": "BaseBdev4", 00:16:38.824 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:38.824 "is_configured": true, 00:16:38.824 "data_offset": 2048, 00:16:38.824 "data_size": 63488 00:16:38.824 } 00:16:38.824 ] 00:16:38.824 }' 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.824 14:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.392 "name": "raid_bdev1", 00:16:39.392 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:39.392 "strip_size_kb": 0, 00:16:39.392 "state": "online", 00:16:39.392 "raid_level": "raid1", 00:16:39.392 "superblock": true, 00:16:39.392 "num_base_bdevs": 4, 00:16:39.392 "num_base_bdevs_discovered": 3, 00:16:39.392 "num_base_bdevs_operational": 3, 00:16:39.392 "base_bdevs_list": [ 00:16:39.392 { 00:16:39.392 "name": "spare", 00:16:39.392 "uuid": "26621ce3-f030-54e4-8aaf-a738f7850d70", 00:16:39.392 "is_configured": true, 00:16:39.392 "data_offset": 2048, 00:16:39.392 "data_size": 63488 00:16:39.392 }, 00:16:39.392 { 00:16:39.392 "name": null, 00:16:39.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.392 "is_configured": false, 00:16:39.392 "data_offset": 2048, 00:16:39.392 "data_size": 63488 00:16:39.392 }, 
00:16:39.392 { 00:16:39.392 "name": "BaseBdev3", 00:16:39.392 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:39.392 "is_configured": true, 00:16:39.392 "data_offset": 2048, 00:16:39.392 "data_size": 63488 00:16:39.392 }, 00:16:39.392 { 00:16:39.392 "name": "BaseBdev4", 00:16:39.392 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:39.392 "is_configured": true, 00:16:39.392 "data_offset": 2048, 00:16:39.392 "data_size": 63488 00:16:39.392 } 00:16:39.392 ] 00:16:39.392 }' 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.392 [2024-11-20 14:27:18.322182] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.392 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.393 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.393 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.393 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.393 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.393 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.393 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.393 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.393 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.393 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.393 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.651 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.651 "name": 
"raid_bdev1", 00:16:39.651 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:39.651 "strip_size_kb": 0, 00:16:39.651 "state": "online", 00:16:39.651 "raid_level": "raid1", 00:16:39.651 "superblock": true, 00:16:39.651 "num_base_bdevs": 4, 00:16:39.651 "num_base_bdevs_discovered": 2, 00:16:39.651 "num_base_bdevs_operational": 2, 00:16:39.651 "base_bdevs_list": [ 00:16:39.651 { 00:16:39.651 "name": null, 00:16:39.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.651 "is_configured": false, 00:16:39.651 "data_offset": 0, 00:16:39.651 "data_size": 63488 00:16:39.651 }, 00:16:39.651 { 00:16:39.651 "name": null, 00:16:39.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.651 "is_configured": false, 00:16:39.651 "data_offset": 2048, 00:16:39.651 "data_size": 63488 00:16:39.651 }, 00:16:39.651 { 00:16:39.651 "name": "BaseBdev3", 00:16:39.651 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:39.651 "is_configured": true, 00:16:39.651 "data_offset": 2048, 00:16:39.651 "data_size": 63488 00:16:39.651 }, 00:16:39.651 { 00:16:39.651 "name": "BaseBdev4", 00:16:39.651 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:39.651 "is_configured": true, 00:16:39.651 "data_offset": 2048, 00:16:39.651 "data_size": 63488 00:16:39.651 } 00:16:39.651 ] 00:16:39.651 }' 00:16:39.651 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.651 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.909 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:39.909 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.909 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.909 [2024-11-20 14:27:18.850435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.909 [2024-11-20 
14:27:18.850807] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:39.909 [2024-11-20 14:27:18.850845] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:39.909 [2024-11-20 14:27:18.850902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.909 [2024-11-20 14:27:18.864562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:16:39.909 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.909 14:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:39.909 [2024-11-20 14:27:18.867050] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:41.287 14:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.287 14:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.287 14:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.287 14:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.287 14:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.287 14:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.287 14:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.287 14:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.287 14:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.287 14:27:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.287 14:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.287 "name": "raid_bdev1", 00:16:41.287 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:41.287 "strip_size_kb": 0, 00:16:41.287 "state": "online", 00:16:41.287 "raid_level": "raid1", 00:16:41.287 "superblock": true, 00:16:41.287 "num_base_bdevs": 4, 00:16:41.287 "num_base_bdevs_discovered": 3, 00:16:41.287 "num_base_bdevs_operational": 3, 00:16:41.287 "process": { 00:16:41.287 "type": "rebuild", 00:16:41.287 "target": "spare", 00:16:41.287 "progress": { 00:16:41.287 "blocks": 20480, 00:16:41.287 "percent": 32 00:16:41.287 } 00:16:41.287 }, 00:16:41.287 "base_bdevs_list": [ 00:16:41.287 { 00:16:41.287 "name": "spare", 00:16:41.287 "uuid": "26621ce3-f030-54e4-8aaf-a738f7850d70", 00:16:41.287 "is_configured": true, 00:16:41.287 "data_offset": 2048, 00:16:41.287 "data_size": 63488 00:16:41.287 }, 00:16:41.287 { 00:16:41.287 "name": null, 00:16:41.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.287 "is_configured": false, 00:16:41.287 "data_offset": 2048, 00:16:41.287 "data_size": 63488 00:16:41.287 }, 00:16:41.287 { 00:16:41.287 "name": "BaseBdev3", 00:16:41.287 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:41.287 "is_configured": true, 00:16:41.287 "data_offset": 2048, 00:16:41.287 "data_size": 63488 00:16:41.287 }, 00:16:41.287 { 00:16:41.287 "name": "BaseBdev4", 00:16:41.287 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:41.287 "is_configured": true, 00:16:41.287 "data_offset": 2048, 00:16:41.287 "data_size": 63488 00:16:41.287 } 00:16:41.287 ] 00:16:41.287 }' 00:16:41.287 14:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.287 14:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:41.287 14:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.287 [2024-11-20 14:27:20.028752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:41.287 [2024-11-20 14:27:20.075938] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:41.287 [2024-11-20 14:27:20.076077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.287 [2024-11-20 14:27:20.076104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:41.287 [2024-11-20 14:27:20.076119] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.287 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.287 "name": "raid_bdev1", 00:16:41.287 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:41.287 "strip_size_kb": 0, 00:16:41.287 "state": "online", 00:16:41.287 "raid_level": "raid1", 00:16:41.287 "superblock": true, 00:16:41.287 "num_base_bdevs": 4, 00:16:41.287 "num_base_bdevs_discovered": 2, 00:16:41.287 "num_base_bdevs_operational": 2, 00:16:41.287 "base_bdevs_list": [ 00:16:41.287 { 00:16:41.287 "name": null, 00:16:41.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.287 "is_configured": false, 00:16:41.287 "data_offset": 0, 00:16:41.287 "data_size": 63488 00:16:41.287 }, 00:16:41.287 { 00:16:41.287 "name": null, 00:16:41.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.287 "is_configured": false, 00:16:41.287 "data_offset": 2048, 00:16:41.287 "data_size": 63488 00:16:41.287 }, 00:16:41.287 { 00:16:41.287 "name": "BaseBdev3", 00:16:41.287 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:41.287 "is_configured": true, 
00:16:41.287 "data_offset": 2048, 00:16:41.287 "data_size": 63488 00:16:41.287 }, 00:16:41.287 { 00:16:41.287 "name": "BaseBdev4", 00:16:41.287 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:41.287 "is_configured": true, 00:16:41.287 "data_offset": 2048, 00:16:41.287 "data_size": 63488 00:16:41.287 } 00:16:41.287 ] 00:16:41.287 }' 00:16:41.288 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.288 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.856 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:41.856 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.856 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.856 [2024-11-20 14:27:20.615396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:41.856 [2024-11-20 14:27:20.615476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.856 [2024-11-20 14:27:20.615517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:41.856 [2024-11-20 14:27:20.615537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.856 [2024-11-20 14:27:20.616152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.856 [2024-11-20 14:27:20.616189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:41.856 [2024-11-20 14:27:20.616305] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:41.856 [2024-11-20 14:27:20.616332] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:41.856 [2024-11-20 14:27:20.616346] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:41.856 [2024-11-20 14:27:20.616378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.856 [2024-11-20 14:27:20.630187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:16:41.856 spare 00:16:41.856 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.856 14:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:41.856 [2024-11-20 14:27:20.632845] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:42.793 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.793 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.793 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.793 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.793 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.793 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.793 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.793 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.793 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.793 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.793 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.793 "name": "raid_bdev1", 00:16:42.793 
"uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:42.793 "strip_size_kb": 0, 00:16:42.793 "state": "online", 00:16:42.793 "raid_level": "raid1", 00:16:42.793 "superblock": true, 00:16:42.793 "num_base_bdevs": 4, 00:16:42.793 "num_base_bdevs_discovered": 3, 00:16:42.793 "num_base_bdevs_operational": 3, 00:16:42.793 "process": { 00:16:42.793 "type": "rebuild", 00:16:42.793 "target": "spare", 00:16:42.793 "progress": { 00:16:42.793 "blocks": 20480, 00:16:42.793 "percent": 32 00:16:42.793 } 00:16:42.793 }, 00:16:42.793 "base_bdevs_list": [ 00:16:42.793 { 00:16:42.793 "name": "spare", 00:16:42.793 "uuid": "26621ce3-f030-54e4-8aaf-a738f7850d70", 00:16:42.793 "is_configured": true, 00:16:42.793 "data_offset": 2048, 00:16:42.793 "data_size": 63488 00:16:42.793 }, 00:16:42.793 { 00:16:42.793 "name": null, 00:16:42.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.793 "is_configured": false, 00:16:42.793 "data_offset": 2048, 00:16:42.793 "data_size": 63488 00:16:42.793 }, 00:16:42.793 { 00:16:42.793 "name": "BaseBdev3", 00:16:42.793 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:42.793 "is_configured": true, 00:16:42.793 "data_offset": 2048, 00:16:42.793 "data_size": 63488 00:16:42.793 }, 00:16:42.793 { 00:16:42.793 "name": "BaseBdev4", 00:16:42.793 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:42.793 "is_configured": true, 00:16:42.793 "data_offset": 2048, 00:16:42.793 "data_size": 63488 00:16:42.793 } 00:16:42.793 ] 00:16:42.793 }' 00:16:42.793 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.793 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.793 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.052 14:27:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.052 [2024-11-20 14:27:21.798084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.052 [2024-11-20 14:27:21.841935] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:43.052 [2024-11-20 14:27:21.842042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.052 [2024-11-20 14:27:21.842073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.052 [2024-11-20 14:27:21.842084] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.052 14:27:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.052 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.052 "name": "raid_bdev1", 00:16:43.052 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:43.052 "strip_size_kb": 0, 00:16:43.052 "state": "online", 00:16:43.052 "raid_level": "raid1", 00:16:43.052 "superblock": true, 00:16:43.052 "num_base_bdevs": 4, 00:16:43.052 "num_base_bdevs_discovered": 2, 00:16:43.052 "num_base_bdevs_operational": 2, 00:16:43.052 "base_bdevs_list": [ 00:16:43.052 { 00:16:43.052 "name": null, 00:16:43.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.052 "is_configured": false, 00:16:43.052 "data_offset": 0, 00:16:43.052 "data_size": 63488 00:16:43.052 }, 00:16:43.053 { 00:16:43.053 "name": null, 00:16:43.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.053 "is_configured": false, 00:16:43.053 "data_offset": 2048, 00:16:43.053 "data_size": 63488 00:16:43.053 }, 00:16:43.053 { 00:16:43.053 "name": "BaseBdev3", 00:16:43.053 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:43.053 "is_configured": true, 00:16:43.053 "data_offset": 2048, 00:16:43.053 "data_size": 63488 00:16:43.053 }, 00:16:43.053 { 00:16:43.053 "name": "BaseBdev4", 00:16:43.053 "uuid": 
"0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:43.053 "is_configured": true, 00:16:43.053 "data_offset": 2048, 00:16:43.053 "data_size": 63488 00:16:43.053 } 00:16:43.053 ] 00:16:43.053 }' 00:16:43.053 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.053 14:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.620 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.620 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.620 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.620 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:43.620 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.620 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.620 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.620 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.620 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.620 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.620 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.621 "name": "raid_bdev1", 00:16:43.621 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:43.621 "strip_size_kb": 0, 00:16:43.621 "state": "online", 00:16:43.621 "raid_level": "raid1", 00:16:43.621 "superblock": true, 00:16:43.621 "num_base_bdevs": 4, 00:16:43.621 "num_base_bdevs_discovered": 2, 00:16:43.621 "num_base_bdevs_operational": 2, 00:16:43.621 
"base_bdevs_list": [ 00:16:43.621 { 00:16:43.621 "name": null, 00:16:43.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.621 "is_configured": false, 00:16:43.621 "data_offset": 0, 00:16:43.621 "data_size": 63488 00:16:43.621 }, 00:16:43.621 { 00:16:43.621 "name": null, 00:16:43.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.621 "is_configured": false, 00:16:43.621 "data_offset": 2048, 00:16:43.621 "data_size": 63488 00:16:43.621 }, 00:16:43.621 { 00:16:43.621 "name": "BaseBdev3", 00:16:43.621 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:43.621 "is_configured": true, 00:16:43.621 "data_offset": 2048, 00:16:43.621 "data_size": 63488 00:16:43.621 }, 00:16:43.621 { 00:16:43.621 "name": "BaseBdev4", 00:16:43.621 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:43.621 "is_configured": true, 00:16:43.621 "data_offset": 2048, 00:16:43.621 "data_size": 63488 00:16:43.621 } 00:16:43.621 ] 00:16:43.621 }' 00:16:43.621 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.621 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.621 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.621 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.621 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:43.621 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.621 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.621 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.621 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
00:16:43.621 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.621 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.621 [2024-11-20 14:27:22.557767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:43.621 [2024-11-20 14:27:22.557845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.621 [2024-11-20 14:27:22.557878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:16:43.621 [2024-11-20 14:27:22.557893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.621 [2024-11-20 14:27:22.558504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.621 [2024-11-20 14:27:22.558538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:43.621 [2024-11-20 14:27:22.558639] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:43.621 [2024-11-20 14:27:22.558667] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:43.621 [2024-11-20 14:27:22.558682] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:43.621 [2024-11-20 14:27:22.558695] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:43.621 BaseBdev1 00:16:43.621 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.621 14:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.997 "name": "raid_bdev1", 00:16:44.997 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:44.997 "strip_size_kb": 0, 00:16:44.997 "state": "online", 00:16:44.997 "raid_level": "raid1", 00:16:44.997 "superblock": true, 00:16:44.997 "num_base_bdevs": 4, 00:16:44.997 "num_base_bdevs_discovered": 2, 00:16:44.997 "num_base_bdevs_operational": 2, 00:16:44.997 "base_bdevs_list": [ 00:16:44.997 { 00:16:44.997 
"name": null, 00:16:44.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.997 "is_configured": false, 00:16:44.997 "data_offset": 0, 00:16:44.997 "data_size": 63488 00:16:44.997 }, 00:16:44.997 { 00:16:44.997 "name": null, 00:16:44.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.997 "is_configured": false, 00:16:44.997 "data_offset": 2048, 00:16:44.997 "data_size": 63488 00:16:44.997 }, 00:16:44.997 { 00:16:44.997 "name": "BaseBdev3", 00:16:44.997 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:44.997 "is_configured": true, 00:16:44.997 "data_offset": 2048, 00:16:44.997 "data_size": 63488 00:16:44.997 }, 00:16:44.997 { 00:16:44.997 "name": "BaseBdev4", 00:16:44.997 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:44.997 "is_configured": true, 00:16:44.997 "data_offset": 2048, 00:16:44.997 "data_size": 63488 00:16:44.997 } 00:16:44.997 ] 00:16:44.997 }' 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.997 14:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.255 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:45.255 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.255 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:45.255 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:45.255 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.255 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.256 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.256 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.256 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.256 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.256 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.256 "name": "raid_bdev1", 00:16:45.256 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:45.256 "strip_size_kb": 0, 00:16:45.256 "state": "online", 00:16:45.256 "raid_level": "raid1", 00:16:45.256 "superblock": true, 00:16:45.256 "num_base_bdevs": 4, 00:16:45.256 "num_base_bdevs_discovered": 2, 00:16:45.256 "num_base_bdevs_operational": 2, 00:16:45.256 "base_bdevs_list": [ 00:16:45.256 { 00:16:45.256 "name": null, 00:16:45.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.256 "is_configured": false, 00:16:45.256 "data_offset": 0, 00:16:45.256 "data_size": 63488 00:16:45.256 }, 00:16:45.256 { 00:16:45.256 "name": null, 00:16:45.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.256 "is_configured": false, 00:16:45.256 "data_offset": 2048, 00:16:45.256 "data_size": 63488 00:16:45.256 }, 00:16:45.256 { 00:16:45.256 "name": "BaseBdev3", 00:16:45.256 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:45.256 "is_configured": true, 00:16:45.256 "data_offset": 2048, 00:16:45.256 "data_size": 63488 00:16:45.256 }, 00:16:45.256 { 00:16:45.256 "name": "BaseBdev4", 00:16:45.256 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:45.256 "is_configured": true, 00:16:45.256 "data_offset": 2048, 00:16:45.256 "data_size": 63488 00:16:45.256 } 00:16:45.256 ] 00:16:45.256 }' 00:16:45.256 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.256 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:45.256 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:45.514 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:45.514 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:45.514 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:16:45.514 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:45.515 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:45.515 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:45.515 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:45.515 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:45.515 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:45.515 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.515 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.515 [2024-11-20 14:27:24.266555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.515 [2024-11-20 14:27:24.266829] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:45.515 [2024-11-20 14:27:24.266860] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:45.515 request: 00:16:45.515 { 00:16:45.515 "base_bdev": "BaseBdev1", 00:16:45.515 "raid_bdev": "raid_bdev1", 00:16:45.515 "method": "bdev_raid_add_base_bdev", 00:16:45.515 
"req_id": 1 00:16:45.515 } 00:16:45.515 Got JSON-RPC error response 00:16:45.515 response: 00:16:45.515 { 00:16:45.515 "code": -22, 00:16:45.515 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:45.515 } 00:16:45.515 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:45.515 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:16:45.515 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:45.515 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:45.515 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:45.515 14:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:46.451 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:46.451 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.451 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.451 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.451 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.451 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:46.451 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.451 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.451 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.451 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.451 
14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.451 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.451 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.451 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.451 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.451 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.451 "name": "raid_bdev1", 00:16:46.451 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:46.451 "strip_size_kb": 0, 00:16:46.451 "state": "online", 00:16:46.451 "raid_level": "raid1", 00:16:46.451 "superblock": true, 00:16:46.451 "num_base_bdevs": 4, 00:16:46.451 "num_base_bdevs_discovered": 2, 00:16:46.451 "num_base_bdevs_operational": 2, 00:16:46.451 "base_bdevs_list": [ 00:16:46.451 { 00:16:46.451 "name": null, 00:16:46.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.451 "is_configured": false, 00:16:46.451 "data_offset": 0, 00:16:46.451 "data_size": 63488 00:16:46.451 }, 00:16:46.451 { 00:16:46.451 "name": null, 00:16:46.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.451 "is_configured": false, 00:16:46.451 "data_offset": 2048, 00:16:46.451 "data_size": 63488 00:16:46.451 }, 00:16:46.451 { 00:16:46.451 "name": "BaseBdev3", 00:16:46.451 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:46.451 "is_configured": true, 00:16:46.451 "data_offset": 2048, 00:16:46.451 "data_size": 63488 00:16:46.451 }, 00:16:46.451 { 00:16:46.451 "name": "BaseBdev4", 00:16:46.451 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:46.451 "is_configured": true, 00:16:46.451 "data_offset": 2048, 00:16:46.451 "data_size": 63488 00:16:46.451 } 00:16:46.451 ] 00:16:46.451 }' 00:16:46.451 14:27:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.451 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.020 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:47.020 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.020 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:47.020 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:47.020 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.020 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.020 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.020 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.021 "name": "raid_bdev1", 00:16:47.021 "uuid": "f8e981be-80a9-4d68-aef4-2b02fae46517", 00:16:47.021 "strip_size_kb": 0, 00:16:47.021 "state": "online", 00:16:47.021 "raid_level": "raid1", 00:16:47.021 "superblock": true, 00:16:47.021 "num_base_bdevs": 4, 00:16:47.021 "num_base_bdevs_discovered": 2, 00:16:47.021 "num_base_bdevs_operational": 2, 00:16:47.021 "base_bdevs_list": [ 00:16:47.021 { 00:16:47.021 "name": null, 00:16:47.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.021 "is_configured": false, 00:16:47.021 "data_offset": 0, 00:16:47.021 
"data_size": 63488 00:16:47.021 }, 00:16:47.021 { 00:16:47.021 "name": null, 00:16:47.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.021 "is_configured": false, 00:16:47.021 "data_offset": 2048, 00:16:47.021 "data_size": 63488 00:16:47.021 }, 00:16:47.021 { 00:16:47.021 "name": "BaseBdev3", 00:16:47.021 "uuid": "b2c3b3ee-f662-55eb-85b2-3ba966c11935", 00:16:47.021 "is_configured": true, 00:16:47.021 "data_offset": 2048, 00:16:47.021 "data_size": 63488 00:16:47.021 }, 00:16:47.021 { 00:16:47.021 "name": "BaseBdev4", 00:16:47.021 "uuid": "0660cdfa-129c-5e86-bd81-2446c60a8c53", 00:16:47.021 "is_configured": true, 00:16:47.021 "data_offset": 2048, 00:16:47.021 "data_size": 63488 00:16:47.021 } 00:16:47.021 ] 00:16:47.021 }' 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79515 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79515 ']' 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79515 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79515 00:16:47.021 killing process with pid 79515 00:16:47.021 Received shutdown signal, test time was about 20.429034 seconds 00:16:47.021 
00:16:47.021 Latency(us) 00:16:47.021 [2024-11-20T14:27:26.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.021 [2024-11-20T14:27:26.003Z] =================================================================================================================== 00:16:47.021 [2024-11-20T14:27:26.003Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79515' 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79515 00:16:47.021 [2024-11-20 14:27:25.978288] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:47.021 14:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79515 00:16:47.021 [2024-11-20 14:27:25.978445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.021 [2024-11-20 14:27:25.978531] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.021 [2024-11-20 14:27:25.978552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:47.682 [2024-11-20 14:27:26.355266] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.625 14:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:48.625 00:16:48.625 real 0m24.143s 00:16:48.625 user 0m32.669s 00:16:48.625 sys 0m2.428s 00:16:48.625 14:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.625 ************************************ 00:16:48.625 END TEST raid_rebuild_test_sb_io 00:16:48.625 
************************************ 00:16:48.625 14:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.625 14:27:27 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:48.625 14:27:27 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:16:48.625 14:27:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:48.625 14:27:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.625 14:27:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.625 ************************************ 00:16:48.625 START TEST raid5f_state_function_test 00:16:48.625 ************************************ 00:16:48.625 14:27:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:16:48.625 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:48.625 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:48.625 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:48.625 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:48.625 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:48.625 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.625 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:48.625 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:48.625 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.625 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:48.626 14:27:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80269 00:16:48.626 Process raid pid: 80269 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80269' 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80269 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80269 ']' 00:16:48.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.626 14:27:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.889 [2024-11-20 14:27:27.637895] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:16:48.889 [2024-11-20 14:27:27.638108] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.889 [2024-11-20 14:27:27.830186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.149 [2024-11-20 14:27:27.990850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.415 [2024-11-20 14:27:28.206087] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.415 [2024-11-20 14:27:28.206140] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.983 [2024-11-20 14:27:28.665365] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.983 [2024-11-20 14:27:28.665443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.983 [2024-11-20 14:27:28.665462] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:49.983 [2024-11-20 14:27:28.665479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:49.983 [2024-11-20 14:27:28.665489] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:49.983 [2024-11-20 14:27:28.665504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.983 "name": "Existed_Raid", 00:16:49.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.983 "strip_size_kb": 64, 00:16:49.983 "state": "configuring", 00:16:49.983 "raid_level": "raid5f", 00:16:49.983 "superblock": false, 00:16:49.983 "num_base_bdevs": 3, 00:16:49.983 "num_base_bdevs_discovered": 0, 00:16:49.983 "num_base_bdevs_operational": 3, 00:16:49.983 "base_bdevs_list": [ 00:16:49.983 { 00:16:49.983 "name": "BaseBdev1", 00:16:49.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.983 "is_configured": false, 00:16:49.983 "data_offset": 0, 00:16:49.983 "data_size": 0 00:16:49.983 }, 00:16:49.983 { 00:16:49.983 "name": "BaseBdev2", 00:16:49.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.983 "is_configured": false, 00:16:49.983 "data_offset": 0, 00:16:49.983 "data_size": 0 00:16:49.983 }, 00:16:49.983 { 00:16:49.983 "name": "BaseBdev3", 00:16:49.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.983 "is_configured": false, 00:16:49.983 "data_offset": 0, 00:16:49.983 "data_size": 0 00:16:49.983 } 00:16:49.983 ] 00:16:49.983 }' 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.983 14:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.243 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:50.243 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.243 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.243 [2024-11-20 14:27:29.193431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:50.243 [2024-11-20 14:27:29.193478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:16:50.243 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.243 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:50.243 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.243 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.243 [2024-11-20 14:27:29.201409] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:50.243 [2024-11-20 14:27:29.201470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:50.243 [2024-11-20 14:27:29.201485] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:50.243 [2024-11-20 14:27:29.201502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.243 [2024-11-20 14:27:29.201512] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:50.243 [2024-11-20 14:27:29.201526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:50.243 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.243 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:50.243 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.243 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.501 [2024-11-20 14:27:29.246723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.501 BaseBdev1 00:16:50.501 14:27:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.501 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:50.501 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:50.501 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:50.501 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:50.501 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:50.501 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:50.501 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.502 [ 00:16:50.502 { 00:16:50.502 "name": "BaseBdev1", 00:16:50.502 "aliases": [ 00:16:50.502 "dfc48390-c298-4273-8d1c-960335d698d3" 00:16:50.502 ], 00:16:50.502 "product_name": "Malloc disk", 00:16:50.502 "block_size": 512, 00:16:50.502 "num_blocks": 65536, 00:16:50.502 "uuid": "dfc48390-c298-4273-8d1c-960335d698d3", 00:16:50.502 "assigned_rate_limits": { 00:16:50.502 "rw_ios_per_sec": 0, 00:16:50.502 
"rw_mbytes_per_sec": 0, 00:16:50.502 "r_mbytes_per_sec": 0, 00:16:50.502 "w_mbytes_per_sec": 0 00:16:50.502 }, 00:16:50.502 "claimed": true, 00:16:50.502 "claim_type": "exclusive_write", 00:16:50.502 "zoned": false, 00:16:50.502 "supported_io_types": { 00:16:50.502 "read": true, 00:16:50.502 "write": true, 00:16:50.502 "unmap": true, 00:16:50.502 "flush": true, 00:16:50.502 "reset": true, 00:16:50.502 "nvme_admin": false, 00:16:50.502 "nvme_io": false, 00:16:50.502 "nvme_io_md": false, 00:16:50.502 "write_zeroes": true, 00:16:50.502 "zcopy": true, 00:16:50.502 "get_zone_info": false, 00:16:50.502 "zone_management": false, 00:16:50.502 "zone_append": false, 00:16:50.502 "compare": false, 00:16:50.502 "compare_and_write": false, 00:16:50.502 "abort": true, 00:16:50.502 "seek_hole": false, 00:16:50.502 "seek_data": false, 00:16:50.502 "copy": true, 00:16:50.502 "nvme_iov_md": false 00:16:50.502 }, 00:16:50.502 "memory_domains": [ 00:16:50.502 { 00:16:50.502 "dma_device_id": "system", 00:16:50.502 "dma_device_type": 1 00:16:50.502 }, 00:16:50.502 { 00:16:50.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.502 "dma_device_type": 2 00:16:50.502 } 00:16:50.502 ], 00:16:50.502 "driver_specific": {} 00:16:50.502 } 00:16:50.502 ] 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.502 14:27:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.502 "name": "Existed_Raid", 00:16:50.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.502 "strip_size_kb": 64, 00:16:50.502 "state": "configuring", 00:16:50.502 "raid_level": "raid5f", 00:16:50.502 "superblock": false, 00:16:50.502 "num_base_bdevs": 3, 00:16:50.502 "num_base_bdevs_discovered": 1, 00:16:50.502 "num_base_bdevs_operational": 3, 00:16:50.502 "base_bdevs_list": [ 00:16:50.502 { 00:16:50.502 "name": "BaseBdev1", 00:16:50.502 "uuid": "dfc48390-c298-4273-8d1c-960335d698d3", 00:16:50.502 "is_configured": true, 00:16:50.502 "data_offset": 0, 00:16:50.502 "data_size": 65536 00:16:50.502 }, 00:16:50.502 { 00:16:50.502 "name": 
"BaseBdev2", 00:16:50.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.502 "is_configured": false, 00:16:50.502 "data_offset": 0, 00:16:50.502 "data_size": 0 00:16:50.502 }, 00:16:50.502 { 00:16:50.502 "name": "BaseBdev3", 00:16:50.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.502 "is_configured": false, 00:16:50.502 "data_offset": 0, 00:16:50.502 "data_size": 0 00:16:50.502 } 00:16:50.502 ] 00:16:50.502 }' 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.502 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.069 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:51.069 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.069 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.069 [2024-11-20 14:27:29.794917] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:51.070 [2024-11-20 14:27:29.794983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.070 [2024-11-20 14:27:29.806965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.070 [2024-11-20 14:27:29.809640] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:16:51.070 [2024-11-20 14:27:29.809813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.070 [2024-11-20 14:27:29.809944] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:51.070 [2024-11-20 14:27:29.810032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.070 "name": "Existed_Raid", 00:16:51.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.070 "strip_size_kb": 64, 00:16:51.070 "state": "configuring", 00:16:51.070 "raid_level": "raid5f", 00:16:51.070 "superblock": false, 00:16:51.070 "num_base_bdevs": 3, 00:16:51.070 "num_base_bdevs_discovered": 1, 00:16:51.070 "num_base_bdevs_operational": 3, 00:16:51.070 "base_bdevs_list": [ 00:16:51.070 { 00:16:51.070 "name": "BaseBdev1", 00:16:51.070 "uuid": "dfc48390-c298-4273-8d1c-960335d698d3", 00:16:51.070 "is_configured": true, 00:16:51.070 "data_offset": 0, 00:16:51.070 "data_size": 65536 00:16:51.070 }, 00:16:51.070 { 00:16:51.070 "name": "BaseBdev2", 00:16:51.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.070 "is_configured": false, 00:16:51.070 "data_offset": 0, 00:16:51.070 "data_size": 0 00:16:51.070 }, 00:16:51.070 { 00:16:51.070 "name": "BaseBdev3", 00:16:51.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.070 "is_configured": false, 00:16:51.070 "data_offset": 0, 00:16:51.070 "data_size": 0 00:16:51.070 } 00:16:51.070 ] 00:16:51.070 }' 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.070 14:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.638 14:27:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:51.638 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.638 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.638 [2024-11-20 14:27:30.369785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:51.638 BaseBdev2 00:16:51.638 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.638 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:51.638 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:51.638 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:51.638 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:51.638 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:51.638 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:51.638 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:51.638 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.638 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.638 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.638 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:51.639 [ 00:16:51.639 { 00:16:51.639 "name": "BaseBdev2", 00:16:51.639 "aliases": [ 00:16:51.639 "ebcbd91a-54d4-44d5-b36b-2602aaa01d43" 00:16:51.639 ], 00:16:51.639 "product_name": "Malloc disk", 00:16:51.639 "block_size": 512, 00:16:51.639 "num_blocks": 65536, 00:16:51.639 "uuid": "ebcbd91a-54d4-44d5-b36b-2602aaa01d43", 00:16:51.639 "assigned_rate_limits": { 00:16:51.639 "rw_ios_per_sec": 0, 00:16:51.639 "rw_mbytes_per_sec": 0, 00:16:51.639 "r_mbytes_per_sec": 0, 00:16:51.639 "w_mbytes_per_sec": 0 00:16:51.639 }, 00:16:51.639 "claimed": true, 00:16:51.639 "claim_type": "exclusive_write", 00:16:51.639 "zoned": false, 00:16:51.639 "supported_io_types": { 00:16:51.639 "read": true, 00:16:51.639 "write": true, 00:16:51.639 "unmap": true, 00:16:51.639 "flush": true, 00:16:51.639 "reset": true, 00:16:51.639 "nvme_admin": false, 00:16:51.639 "nvme_io": false, 00:16:51.639 "nvme_io_md": false, 00:16:51.639 "write_zeroes": true, 00:16:51.639 "zcopy": true, 00:16:51.639 "get_zone_info": false, 00:16:51.639 "zone_management": false, 00:16:51.639 "zone_append": false, 00:16:51.639 "compare": false, 00:16:51.639 "compare_and_write": false, 00:16:51.639 "abort": true, 00:16:51.639 "seek_hole": false, 00:16:51.639 "seek_data": false, 00:16:51.639 "copy": true, 00:16:51.639 "nvme_iov_md": false 00:16:51.639 }, 00:16:51.639 "memory_domains": [ 00:16:51.639 { 00:16:51.639 "dma_device_id": "system", 00:16:51.639 "dma_device_type": 1 00:16:51.639 }, 00:16:51.639 { 00:16:51.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.639 "dma_device_type": 2 00:16:51.639 } 00:16:51.639 ], 00:16:51.639 "driver_specific": {} 00:16:51.639 } 00:16:51.639 ] 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:16:51.639 "name": "Existed_Raid", 00:16:51.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.639 "strip_size_kb": 64, 00:16:51.639 "state": "configuring", 00:16:51.639 "raid_level": "raid5f", 00:16:51.639 "superblock": false, 00:16:51.639 "num_base_bdevs": 3, 00:16:51.639 "num_base_bdevs_discovered": 2, 00:16:51.639 "num_base_bdevs_operational": 3, 00:16:51.639 "base_bdevs_list": [ 00:16:51.639 { 00:16:51.639 "name": "BaseBdev1", 00:16:51.639 "uuid": "dfc48390-c298-4273-8d1c-960335d698d3", 00:16:51.639 "is_configured": true, 00:16:51.639 "data_offset": 0, 00:16:51.639 "data_size": 65536 00:16:51.639 }, 00:16:51.639 { 00:16:51.639 "name": "BaseBdev2", 00:16:51.639 "uuid": "ebcbd91a-54d4-44d5-b36b-2602aaa01d43", 00:16:51.639 "is_configured": true, 00:16:51.639 "data_offset": 0, 00:16:51.639 "data_size": 65536 00:16:51.639 }, 00:16:51.639 { 00:16:51.639 "name": "BaseBdev3", 00:16:51.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.639 "is_configured": false, 00:16:51.639 "data_offset": 0, 00:16:51.639 "data_size": 0 00:16:51.639 } 00:16:51.639 ] 00:16:51.639 }' 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.639 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.219 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:52.219 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.219 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.219 [2024-11-20 14:27:30.973370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:52.219 [2024-11-20 14:27:30.973468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:52.219 [2024-11-20 14:27:30.973493] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:52.219 [2024-11-20 14:27:30.973832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:52.219 [2024-11-20 14:27:30.979208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:52.219 [2024-11-20 14:27:30.979237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:52.219 [2024-11-20 14:27:30.979680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.219 BaseBdev3 00:16:52.219 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.219 14:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:52.219 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:52.219 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:52.220 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:52.220 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:52.220 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:52.220 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:52.220 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.220 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.220 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.220 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:16:52.220 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.220 14:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.220 [ 00:16:52.220 { 00:16:52.220 "name": "BaseBdev3", 00:16:52.220 "aliases": [ 00:16:52.220 "c02dc1b0-7ef8-4698-be5e-35127f7ab63e" 00:16:52.220 ], 00:16:52.220 "product_name": "Malloc disk", 00:16:52.220 "block_size": 512, 00:16:52.220 "num_blocks": 65536, 00:16:52.220 "uuid": "c02dc1b0-7ef8-4698-be5e-35127f7ab63e", 00:16:52.220 "assigned_rate_limits": { 00:16:52.220 "rw_ios_per_sec": 0, 00:16:52.220 "rw_mbytes_per_sec": 0, 00:16:52.220 "r_mbytes_per_sec": 0, 00:16:52.220 "w_mbytes_per_sec": 0 00:16:52.220 }, 00:16:52.220 "claimed": true, 00:16:52.220 "claim_type": "exclusive_write", 00:16:52.220 "zoned": false, 00:16:52.220 "supported_io_types": { 00:16:52.220 "read": true, 00:16:52.220 "write": true, 00:16:52.220 "unmap": true, 00:16:52.220 "flush": true, 00:16:52.220 "reset": true, 00:16:52.220 "nvme_admin": false, 00:16:52.220 "nvme_io": false, 00:16:52.220 "nvme_io_md": false, 00:16:52.220 "write_zeroes": true, 00:16:52.220 "zcopy": true, 00:16:52.220 "get_zone_info": false, 00:16:52.220 "zone_management": false, 00:16:52.220 "zone_append": false, 00:16:52.220 "compare": false, 00:16:52.220 "compare_and_write": false, 00:16:52.220 "abort": true, 00:16:52.220 "seek_hole": false, 00:16:52.220 "seek_data": false, 00:16:52.220 "copy": true, 00:16:52.220 "nvme_iov_md": false 00:16:52.220 }, 00:16:52.220 "memory_domains": [ 00:16:52.220 { 00:16:52.220 "dma_device_id": "system", 00:16:52.220 "dma_device_type": 1 00:16:52.220 }, 00:16:52.220 { 00:16:52.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.220 "dma_device_type": 2 00:16:52.220 } 00:16:52.220 ], 00:16:52.220 "driver_specific": {} 00:16:52.220 } 00:16:52.220 ] 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.220 14:27:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.220 "name": "Existed_Raid", 00:16:52.220 "uuid": "ceb8def3-fcf4-4780-91af-addc8a3d209b", 00:16:52.220 "strip_size_kb": 64, 00:16:52.220 "state": "online", 00:16:52.220 "raid_level": "raid5f", 00:16:52.220 "superblock": false, 00:16:52.220 "num_base_bdevs": 3, 00:16:52.220 "num_base_bdevs_discovered": 3, 00:16:52.220 "num_base_bdevs_operational": 3, 00:16:52.220 "base_bdevs_list": [ 00:16:52.220 { 00:16:52.220 "name": "BaseBdev1", 00:16:52.220 "uuid": "dfc48390-c298-4273-8d1c-960335d698d3", 00:16:52.220 "is_configured": true, 00:16:52.220 "data_offset": 0, 00:16:52.220 "data_size": 65536 00:16:52.220 }, 00:16:52.220 { 00:16:52.220 "name": "BaseBdev2", 00:16:52.220 "uuid": "ebcbd91a-54d4-44d5-b36b-2602aaa01d43", 00:16:52.220 "is_configured": true, 00:16:52.220 "data_offset": 0, 00:16:52.220 "data_size": 65536 00:16:52.220 }, 00:16:52.220 { 00:16:52.220 "name": "BaseBdev3", 00:16:52.220 "uuid": "c02dc1b0-7ef8-4698-be5e-35127f7ab63e", 00:16:52.220 "is_configured": true, 00:16:52.220 "data_offset": 0, 00:16:52.220 "data_size": 65536 00:16:52.220 } 00:16:52.220 ] 00:16:52.220 }' 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.220 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:52.787 14:27:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:52.787 [2024-11-20 14:27:31.541693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:52.787 "name": "Existed_Raid", 00:16:52.787 "aliases": [ 00:16:52.787 "ceb8def3-fcf4-4780-91af-addc8a3d209b" 00:16:52.787 ], 00:16:52.787 "product_name": "Raid Volume", 00:16:52.787 "block_size": 512, 00:16:52.787 "num_blocks": 131072, 00:16:52.787 "uuid": "ceb8def3-fcf4-4780-91af-addc8a3d209b", 00:16:52.787 "assigned_rate_limits": { 00:16:52.787 "rw_ios_per_sec": 0, 00:16:52.787 "rw_mbytes_per_sec": 0, 00:16:52.787 "r_mbytes_per_sec": 0, 00:16:52.787 "w_mbytes_per_sec": 0 00:16:52.787 }, 00:16:52.787 "claimed": false, 00:16:52.787 "zoned": false, 00:16:52.787 "supported_io_types": { 00:16:52.787 "read": true, 00:16:52.787 "write": true, 00:16:52.787 "unmap": false, 00:16:52.787 "flush": false, 00:16:52.787 "reset": true, 00:16:52.787 "nvme_admin": false, 00:16:52.787 "nvme_io": false, 00:16:52.787 "nvme_io_md": false, 00:16:52.787 "write_zeroes": true, 00:16:52.787 "zcopy": false, 00:16:52.787 "get_zone_info": false, 00:16:52.787 "zone_management": false, 00:16:52.787 "zone_append": false, 
00:16:52.787 "compare": false, 00:16:52.787 "compare_and_write": false, 00:16:52.787 "abort": false, 00:16:52.787 "seek_hole": false, 00:16:52.787 "seek_data": false, 00:16:52.787 "copy": false, 00:16:52.787 "nvme_iov_md": false 00:16:52.787 }, 00:16:52.787 "driver_specific": { 00:16:52.787 "raid": { 00:16:52.787 "uuid": "ceb8def3-fcf4-4780-91af-addc8a3d209b", 00:16:52.787 "strip_size_kb": 64, 00:16:52.787 "state": "online", 00:16:52.787 "raid_level": "raid5f", 00:16:52.787 "superblock": false, 00:16:52.787 "num_base_bdevs": 3, 00:16:52.787 "num_base_bdevs_discovered": 3, 00:16:52.787 "num_base_bdevs_operational": 3, 00:16:52.787 "base_bdevs_list": [ 00:16:52.787 { 00:16:52.787 "name": "BaseBdev1", 00:16:52.787 "uuid": "dfc48390-c298-4273-8d1c-960335d698d3", 00:16:52.787 "is_configured": true, 00:16:52.787 "data_offset": 0, 00:16:52.787 "data_size": 65536 00:16:52.787 }, 00:16:52.787 { 00:16:52.787 "name": "BaseBdev2", 00:16:52.787 "uuid": "ebcbd91a-54d4-44d5-b36b-2602aaa01d43", 00:16:52.787 "is_configured": true, 00:16:52.787 "data_offset": 0, 00:16:52.787 "data_size": 65536 00:16:52.787 }, 00:16:52.787 { 00:16:52.787 "name": "BaseBdev3", 00:16:52.787 "uuid": "c02dc1b0-7ef8-4698-be5e-35127f7ab63e", 00:16:52.787 "is_configured": true, 00:16:52.787 "data_offset": 0, 00:16:52.787 "data_size": 65536 00:16:52.787 } 00:16:52.787 ] 00:16:52.787 } 00:16:52.787 } 00:16:52.787 }' 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:52.787 BaseBdev2 00:16:52.787 BaseBdev3' 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.787 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.046 [2024-11-20 14:27:31.853584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:53.046 
14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.046 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.047 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.047 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.047 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.047 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.047 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.047 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.047 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.047 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.047 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.047 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.047 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.047 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.047 "name": "Existed_Raid", 00:16:53.047 "uuid": "ceb8def3-fcf4-4780-91af-addc8a3d209b", 00:16:53.047 "strip_size_kb": 64, 00:16:53.047 "state": 
"online", 00:16:53.047 "raid_level": "raid5f", 00:16:53.047 "superblock": false, 00:16:53.047 "num_base_bdevs": 3, 00:16:53.047 "num_base_bdevs_discovered": 2, 00:16:53.047 "num_base_bdevs_operational": 2, 00:16:53.047 "base_bdevs_list": [ 00:16:53.047 { 00:16:53.047 "name": null, 00:16:53.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.047 "is_configured": false, 00:16:53.047 "data_offset": 0, 00:16:53.047 "data_size": 65536 00:16:53.047 }, 00:16:53.047 { 00:16:53.047 "name": "BaseBdev2", 00:16:53.047 "uuid": "ebcbd91a-54d4-44d5-b36b-2602aaa01d43", 00:16:53.047 "is_configured": true, 00:16:53.047 "data_offset": 0, 00:16:53.047 "data_size": 65536 00:16:53.047 }, 00:16:53.047 { 00:16:53.047 "name": "BaseBdev3", 00:16:53.047 "uuid": "c02dc1b0-7ef8-4698-be5e-35127f7ab63e", 00:16:53.047 "is_configured": true, 00:16:53.047 "data_offset": 0, 00:16:53.047 "data_size": 65536 00:16:53.047 } 00:16:53.047 ] 00:16:53.047 }' 00:16:53.047 14:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.047 14:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.614 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:53.614 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:53.614 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.614 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:53.614 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.614 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.614 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.614 14:27:32 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:53.614 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:53.614 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:53.614 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.614 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.614 [2024-11-20 14:27:32.526743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:53.614 [2024-11-20 14:27:32.526870] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:53.873 [2024-11-20 14:27:32.611181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.873 [2024-11-20 14:27:32.687293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:53.873 [2024-11-20 14:27:32.687372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.873 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.132 BaseBdev2 00:16:54.132 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.132 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:54.132 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:54.132 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:54.133 [ 00:16:54.133 { 00:16:54.133 "name": "BaseBdev2", 00:16:54.133 "aliases": [ 00:16:54.133 "b5a24f09-5289-4c84-8956-d5e7023787ed" 00:16:54.133 ], 00:16:54.133 "product_name": "Malloc disk", 00:16:54.133 "block_size": 512, 00:16:54.133 "num_blocks": 65536, 00:16:54.133 "uuid": "b5a24f09-5289-4c84-8956-d5e7023787ed", 00:16:54.133 "assigned_rate_limits": { 00:16:54.133 "rw_ios_per_sec": 0, 00:16:54.133 "rw_mbytes_per_sec": 0, 00:16:54.133 "r_mbytes_per_sec": 0, 00:16:54.133 "w_mbytes_per_sec": 0 00:16:54.133 }, 00:16:54.133 "claimed": false, 00:16:54.133 "zoned": false, 00:16:54.133 "supported_io_types": { 00:16:54.133 "read": true, 00:16:54.133 "write": true, 00:16:54.133 "unmap": true, 00:16:54.133 "flush": true, 00:16:54.133 "reset": true, 00:16:54.133 "nvme_admin": false, 00:16:54.133 "nvme_io": false, 00:16:54.133 "nvme_io_md": false, 00:16:54.133 "write_zeroes": true, 00:16:54.133 "zcopy": true, 00:16:54.133 "get_zone_info": false, 00:16:54.133 "zone_management": false, 00:16:54.133 "zone_append": false, 00:16:54.133 "compare": false, 00:16:54.133 "compare_and_write": false, 00:16:54.133 "abort": true, 00:16:54.133 "seek_hole": false, 00:16:54.133 "seek_data": false, 00:16:54.133 "copy": true, 00:16:54.133 "nvme_iov_md": false 00:16:54.133 }, 00:16:54.133 "memory_domains": [ 00:16:54.133 { 00:16:54.133 "dma_device_id": "system", 00:16:54.133 "dma_device_type": 1 00:16:54.133 }, 00:16:54.133 { 00:16:54.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.133 "dma_device_type": 2 00:16:54.133 } 00:16:54.133 ], 00:16:54.133 "driver_specific": {} 00:16:54.133 } 00:16:54.133 ] 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.133 BaseBdev3 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:54.133 [ 00:16:54.133 { 00:16:54.133 "name": "BaseBdev3", 00:16:54.133 "aliases": [ 00:16:54.133 "688166a5-4661-4ad3-9a6c-22025740df26" 00:16:54.133 ], 00:16:54.133 "product_name": "Malloc disk", 00:16:54.133 "block_size": 512, 00:16:54.133 "num_blocks": 65536, 00:16:54.133 "uuid": "688166a5-4661-4ad3-9a6c-22025740df26", 00:16:54.133 "assigned_rate_limits": { 00:16:54.133 "rw_ios_per_sec": 0, 00:16:54.133 "rw_mbytes_per_sec": 0, 00:16:54.133 "r_mbytes_per_sec": 0, 00:16:54.133 "w_mbytes_per_sec": 0 00:16:54.133 }, 00:16:54.133 "claimed": false, 00:16:54.133 "zoned": false, 00:16:54.133 "supported_io_types": { 00:16:54.133 "read": true, 00:16:54.133 "write": true, 00:16:54.133 "unmap": true, 00:16:54.133 "flush": true, 00:16:54.133 "reset": true, 00:16:54.133 "nvme_admin": false, 00:16:54.133 "nvme_io": false, 00:16:54.133 "nvme_io_md": false, 00:16:54.133 "write_zeroes": true, 00:16:54.133 "zcopy": true, 00:16:54.133 "get_zone_info": false, 00:16:54.133 "zone_management": false, 00:16:54.133 "zone_append": false, 00:16:54.133 "compare": false, 00:16:54.133 "compare_and_write": false, 00:16:54.133 "abort": true, 00:16:54.133 "seek_hole": false, 00:16:54.133 "seek_data": false, 00:16:54.133 "copy": true, 00:16:54.133 "nvme_iov_md": false 00:16:54.133 }, 00:16:54.133 "memory_domains": [ 00:16:54.133 { 00:16:54.133 "dma_device_id": "system", 00:16:54.133 "dma_device_type": 1 00:16:54.133 }, 00:16:54.133 { 00:16:54.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.133 "dma_device_type": 2 00:16:54.133 } 00:16:54.133 ], 00:16:54.133 "driver_specific": {} 00:16:54.133 } 00:16:54.133 ] 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:54.133 14:27:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.133 [2024-11-20 14:27:32.980713] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:54.133 [2024-11-20 14:27:32.980941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:54.133 [2024-11-20 14:27:32.981116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:54.133 [2024-11-20 14:27:32.983792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.133 14:27:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.133 14:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.133 14:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.133 14:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.133 "name": "Existed_Raid", 00:16:54.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.133 "strip_size_kb": 64, 00:16:54.133 "state": "configuring", 00:16:54.133 "raid_level": "raid5f", 00:16:54.133 "superblock": false, 00:16:54.133 "num_base_bdevs": 3, 00:16:54.133 "num_base_bdevs_discovered": 2, 00:16:54.133 "num_base_bdevs_operational": 3, 00:16:54.133 "base_bdevs_list": [ 00:16:54.133 { 00:16:54.133 "name": "BaseBdev1", 00:16:54.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.133 "is_configured": false, 00:16:54.133 "data_offset": 0, 00:16:54.133 "data_size": 0 00:16:54.133 }, 00:16:54.133 { 00:16:54.133 "name": "BaseBdev2", 00:16:54.133 "uuid": "b5a24f09-5289-4c84-8956-d5e7023787ed", 00:16:54.133 "is_configured": true, 00:16:54.134 "data_offset": 0, 00:16:54.134 "data_size": 65536 00:16:54.134 }, 00:16:54.134 { 00:16:54.134 "name": "BaseBdev3", 00:16:54.134 "uuid": "688166a5-4661-4ad3-9a6c-22025740df26", 00:16:54.134 "is_configured": true, 
00:16:54.134 "data_offset": 0, 00:16:54.134 "data_size": 65536 00:16:54.134 } 00:16:54.134 ] 00:16:54.134 }' 00:16:54.134 14:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.134 14:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.702 [2024-11-20 14:27:33.512855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.702 14:27:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.702 "name": "Existed_Raid", 00:16:54.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.702 "strip_size_kb": 64, 00:16:54.702 "state": "configuring", 00:16:54.702 "raid_level": "raid5f", 00:16:54.702 "superblock": false, 00:16:54.702 "num_base_bdevs": 3, 00:16:54.702 "num_base_bdevs_discovered": 1, 00:16:54.702 "num_base_bdevs_operational": 3, 00:16:54.702 "base_bdevs_list": [ 00:16:54.702 { 00:16:54.702 "name": "BaseBdev1", 00:16:54.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.702 "is_configured": false, 00:16:54.702 "data_offset": 0, 00:16:54.702 "data_size": 0 00:16:54.702 }, 00:16:54.702 { 00:16:54.702 "name": null, 00:16:54.702 "uuid": "b5a24f09-5289-4c84-8956-d5e7023787ed", 00:16:54.702 "is_configured": false, 00:16:54.702 "data_offset": 0, 00:16:54.702 "data_size": 65536 00:16:54.702 }, 00:16:54.702 { 00:16:54.702 "name": "BaseBdev3", 00:16:54.702 "uuid": "688166a5-4661-4ad3-9a6c-22025740df26", 00:16:54.702 "is_configured": true, 00:16:54.702 "data_offset": 0, 00:16:54.702 "data_size": 65536 00:16:54.702 } 00:16:54.702 ] 00:16:54.702 }' 00:16:54.702 14:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.702 14:27:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.269 [2024-11-20 14:27:34.138910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.269 BaseBdev1 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:55.269 14:27:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:55.269 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.270 [ 00:16:55.270 { 00:16:55.270 "name": "BaseBdev1", 00:16:55.270 "aliases": [ 00:16:55.270 "94dce24d-9d88-4f22-ad11-7f81b67f5f61" 00:16:55.270 ], 00:16:55.270 "product_name": "Malloc disk", 00:16:55.270 "block_size": 512, 00:16:55.270 "num_blocks": 65536, 00:16:55.270 "uuid": "94dce24d-9d88-4f22-ad11-7f81b67f5f61", 00:16:55.270 "assigned_rate_limits": { 00:16:55.270 "rw_ios_per_sec": 0, 00:16:55.270 "rw_mbytes_per_sec": 0, 00:16:55.270 "r_mbytes_per_sec": 0, 00:16:55.270 "w_mbytes_per_sec": 0 00:16:55.270 }, 00:16:55.270 "claimed": true, 00:16:55.270 "claim_type": "exclusive_write", 00:16:55.270 "zoned": false, 00:16:55.270 "supported_io_types": { 00:16:55.270 "read": true, 00:16:55.270 "write": true, 00:16:55.270 "unmap": true, 00:16:55.270 "flush": true, 00:16:55.270 "reset": true, 00:16:55.270 "nvme_admin": false, 00:16:55.270 "nvme_io": false, 00:16:55.270 "nvme_io_md": false, 00:16:55.270 "write_zeroes": true, 00:16:55.270 "zcopy": true, 00:16:55.270 "get_zone_info": false, 00:16:55.270 "zone_management": false, 00:16:55.270 "zone_append": false, 00:16:55.270 
"compare": false, 00:16:55.270 "compare_and_write": false, 00:16:55.270 "abort": true, 00:16:55.270 "seek_hole": false, 00:16:55.270 "seek_data": false, 00:16:55.270 "copy": true, 00:16:55.270 "nvme_iov_md": false 00:16:55.270 }, 00:16:55.270 "memory_domains": [ 00:16:55.270 { 00:16:55.270 "dma_device_id": "system", 00:16:55.270 "dma_device_type": 1 00:16:55.270 }, 00:16:55.270 { 00:16:55.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.270 "dma_device_type": 2 00:16:55.270 } 00:16:55.270 ], 00:16:55.270 "driver_specific": {} 00:16:55.270 } 00:16:55.270 ] 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.270 14:27:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.270 "name": "Existed_Raid", 00:16:55.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.270 "strip_size_kb": 64, 00:16:55.270 "state": "configuring", 00:16:55.270 "raid_level": "raid5f", 00:16:55.270 "superblock": false, 00:16:55.270 "num_base_bdevs": 3, 00:16:55.270 "num_base_bdevs_discovered": 2, 00:16:55.270 "num_base_bdevs_operational": 3, 00:16:55.270 "base_bdevs_list": [ 00:16:55.270 { 00:16:55.270 "name": "BaseBdev1", 00:16:55.270 "uuid": "94dce24d-9d88-4f22-ad11-7f81b67f5f61", 00:16:55.270 "is_configured": true, 00:16:55.270 "data_offset": 0, 00:16:55.270 "data_size": 65536 00:16:55.270 }, 00:16:55.270 { 00:16:55.270 "name": null, 00:16:55.270 "uuid": "b5a24f09-5289-4c84-8956-d5e7023787ed", 00:16:55.270 "is_configured": false, 00:16:55.270 "data_offset": 0, 00:16:55.270 "data_size": 65536 00:16:55.270 }, 00:16:55.270 { 00:16:55.270 "name": "BaseBdev3", 00:16:55.270 "uuid": "688166a5-4661-4ad3-9a6c-22025740df26", 00:16:55.270 "is_configured": true, 00:16:55.270 "data_offset": 0, 00:16:55.270 "data_size": 65536 00:16:55.270 } 00:16:55.270 ] 00:16:55.270 }' 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.270 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.838 14:27:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.838 [2024-11-20 14:27:34.771170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.838 14:27:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.838 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.097 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.097 "name": "Existed_Raid", 00:16:56.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.097 "strip_size_kb": 64, 00:16:56.097 "state": "configuring", 00:16:56.097 "raid_level": "raid5f", 00:16:56.097 "superblock": false, 00:16:56.097 "num_base_bdevs": 3, 00:16:56.097 "num_base_bdevs_discovered": 1, 00:16:56.097 "num_base_bdevs_operational": 3, 00:16:56.097 "base_bdevs_list": [ 00:16:56.097 { 00:16:56.097 "name": "BaseBdev1", 00:16:56.097 "uuid": "94dce24d-9d88-4f22-ad11-7f81b67f5f61", 00:16:56.097 "is_configured": true, 00:16:56.097 "data_offset": 0, 00:16:56.097 "data_size": 65536 00:16:56.097 }, 00:16:56.097 { 00:16:56.097 "name": null, 00:16:56.097 "uuid": "b5a24f09-5289-4c84-8956-d5e7023787ed", 00:16:56.097 "is_configured": false, 00:16:56.097 "data_offset": 0, 00:16:56.097 "data_size": 65536 00:16:56.097 }, 00:16:56.097 { 00:16:56.097 "name": null, 
00:16:56.097 "uuid": "688166a5-4661-4ad3-9a6c-22025740df26", 00:16:56.097 "is_configured": false, 00:16:56.097 "data_offset": 0, 00:16:56.097 "data_size": 65536 00:16:56.097 } 00:16:56.097 ] 00:16:56.097 }' 00:16:56.097 14:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.097 14:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.355 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.355 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:56.355 14:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.355 14:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.355 14:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.614 [2024-11-20 14:27:35.355363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.614 14:27:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.614 14:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.615 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.615 "name": "Existed_Raid", 00:16:56.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.615 "strip_size_kb": 64, 00:16:56.615 "state": "configuring", 00:16:56.615 "raid_level": "raid5f", 00:16:56.615 "superblock": false, 00:16:56.615 "num_base_bdevs": 3, 00:16:56.615 "num_base_bdevs_discovered": 2, 00:16:56.615 "num_base_bdevs_operational": 3, 00:16:56.615 "base_bdevs_list": [ 00:16:56.615 { 
00:16:56.615 "name": "BaseBdev1", 00:16:56.615 "uuid": "94dce24d-9d88-4f22-ad11-7f81b67f5f61", 00:16:56.615 "is_configured": true, 00:16:56.615 "data_offset": 0, 00:16:56.615 "data_size": 65536 00:16:56.615 }, 00:16:56.615 { 00:16:56.615 "name": null, 00:16:56.615 "uuid": "b5a24f09-5289-4c84-8956-d5e7023787ed", 00:16:56.615 "is_configured": false, 00:16:56.615 "data_offset": 0, 00:16:56.615 "data_size": 65536 00:16:56.615 }, 00:16:56.615 { 00:16:56.615 "name": "BaseBdev3", 00:16:56.615 "uuid": "688166a5-4661-4ad3-9a6c-22025740df26", 00:16:56.615 "is_configured": true, 00:16:56.615 "data_offset": 0, 00:16:56.615 "data_size": 65536 00:16:56.615 } 00:16:56.615 ] 00:16:56.615 }' 00:16:56.615 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.615 14:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.183 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:57.183 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.183 14:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.183 14:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.183 14:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.183 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:57.183 14:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:57.183 14:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.183 14:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.183 [2024-11-20 14:27:35.979563] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.183 "name": "Existed_Raid", 00:16:57.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.183 "strip_size_kb": 64, 00:16:57.183 "state": "configuring", 00:16:57.183 "raid_level": "raid5f", 00:16:57.183 "superblock": false, 00:16:57.183 "num_base_bdevs": 3, 00:16:57.183 "num_base_bdevs_discovered": 1, 00:16:57.183 "num_base_bdevs_operational": 3, 00:16:57.183 "base_bdevs_list": [ 00:16:57.183 { 00:16:57.183 "name": null, 00:16:57.183 "uuid": "94dce24d-9d88-4f22-ad11-7f81b67f5f61", 00:16:57.183 "is_configured": false, 00:16:57.183 "data_offset": 0, 00:16:57.183 "data_size": 65536 00:16:57.183 }, 00:16:57.183 { 00:16:57.183 "name": null, 00:16:57.183 "uuid": "b5a24f09-5289-4c84-8956-d5e7023787ed", 00:16:57.183 "is_configured": false, 00:16:57.183 "data_offset": 0, 00:16:57.183 "data_size": 65536 00:16:57.183 }, 00:16:57.183 { 00:16:57.183 "name": "BaseBdev3", 00:16:57.183 "uuid": "688166a5-4661-4ad3-9a6c-22025740df26", 00:16:57.183 "is_configured": true, 00:16:57.183 "data_offset": 0, 00:16:57.183 "data_size": 65536 00:16:57.183 } 00:16:57.183 ] 00:16:57.183 }' 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.183 14:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.752 [2024-11-20 14:27:36.659759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.752 14:27:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.752 "name": "Existed_Raid", 00:16:57.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.752 "strip_size_kb": 64, 00:16:57.752 "state": "configuring", 00:16:57.752 "raid_level": "raid5f", 00:16:57.752 "superblock": false, 00:16:57.752 "num_base_bdevs": 3, 00:16:57.752 "num_base_bdevs_discovered": 2, 00:16:57.752 "num_base_bdevs_operational": 3, 00:16:57.752 "base_bdevs_list": [ 00:16:57.752 { 00:16:57.752 "name": null, 00:16:57.752 "uuid": "94dce24d-9d88-4f22-ad11-7f81b67f5f61", 00:16:57.752 "is_configured": false, 00:16:57.752 "data_offset": 0, 00:16:57.752 "data_size": 65536 00:16:57.752 }, 00:16:57.752 { 00:16:57.752 "name": "BaseBdev2", 00:16:57.752 "uuid": "b5a24f09-5289-4c84-8956-d5e7023787ed", 00:16:57.752 "is_configured": true, 00:16:57.752 "data_offset": 0, 00:16:57.752 "data_size": 65536 00:16:57.752 }, 00:16:57.752 { 00:16:57.752 "name": "BaseBdev3", 00:16:57.752 "uuid": "688166a5-4661-4ad3-9a6c-22025740df26", 00:16:57.752 "is_configured": true, 00:16:57.752 "data_offset": 0, 00:16:57.752 "data_size": 65536 00:16:57.752 } 00:16:57.752 ] 00:16:57.752 }' 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.752 14:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.320 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.320 14:27:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:58.320 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.320 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.320 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.320 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:58.320 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:58.320 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.320 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.320 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.320 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.320 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 94dce24d-9d88-4f22-ad11-7f81b67f5f61 00:16:58.320 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.320 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.320 [2024-11-20 14:27:37.298094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:58.320 [2024-11-20 14:27:37.298368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:58.320 [2024-11-20 14:27:37.298401] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:58.320 [2024-11-20 14:27:37.298724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:16:58.580 [2024-11-20 14:27:37.303666] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:58.580 [2024-11-20 14:27:37.303693] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:58.580 [2024-11-20 14:27:37.304035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.580 NewBaseBdev 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.580 14:27:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.580 [ 00:16:58.580 { 00:16:58.580 "name": "NewBaseBdev", 00:16:58.580 "aliases": [ 00:16:58.580 "94dce24d-9d88-4f22-ad11-7f81b67f5f61" 00:16:58.580 ], 00:16:58.580 "product_name": "Malloc disk", 00:16:58.580 "block_size": 512, 00:16:58.580 "num_blocks": 65536, 00:16:58.580 "uuid": "94dce24d-9d88-4f22-ad11-7f81b67f5f61", 00:16:58.580 "assigned_rate_limits": { 00:16:58.580 "rw_ios_per_sec": 0, 00:16:58.580 "rw_mbytes_per_sec": 0, 00:16:58.580 "r_mbytes_per_sec": 0, 00:16:58.580 "w_mbytes_per_sec": 0 00:16:58.580 }, 00:16:58.580 "claimed": true, 00:16:58.580 "claim_type": "exclusive_write", 00:16:58.580 "zoned": false, 00:16:58.580 "supported_io_types": { 00:16:58.580 "read": true, 00:16:58.580 "write": true, 00:16:58.580 "unmap": true, 00:16:58.580 "flush": true, 00:16:58.580 "reset": true, 00:16:58.580 "nvme_admin": false, 00:16:58.580 "nvme_io": false, 00:16:58.580 "nvme_io_md": false, 00:16:58.580 "write_zeroes": true, 00:16:58.580 "zcopy": true, 00:16:58.580 "get_zone_info": false, 00:16:58.580 "zone_management": false, 00:16:58.580 "zone_append": false, 00:16:58.580 "compare": false, 00:16:58.580 "compare_and_write": false, 00:16:58.580 "abort": true, 00:16:58.580 "seek_hole": false, 00:16:58.580 "seek_data": false, 00:16:58.580 "copy": true, 00:16:58.580 "nvme_iov_md": false 00:16:58.580 }, 00:16:58.580 "memory_domains": [ 00:16:58.580 { 00:16:58.580 "dma_device_id": "system", 00:16:58.580 "dma_device_type": 1 00:16:58.580 }, 00:16:58.580 { 00:16:58.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.580 "dma_device_type": 2 00:16:58.580 } 00:16:58.580 ], 00:16:58.580 "driver_specific": {} 00:16:58.580 } 00:16:58.580 ] 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:58.580 14:27:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.580 "name": "Existed_Raid", 00:16:58.580 "uuid": "9f19aa3e-e335-46e4-8432-0a2514ef4fec", 00:16:58.580 "strip_size_kb": 64, 00:16:58.580 "state": "online", 
00:16:58.580 "raid_level": "raid5f", 00:16:58.580 "superblock": false, 00:16:58.580 "num_base_bdevs": 3, 00:16:58.580 "num_base_bdevs_discovered": 3, 00:16:58.580 "num_base_bdevs_operational": 3, 00:16:58.580 "base_bdevs_list": [ 00:16:58.580 { 00:16:58.580 "name": "NewBaseBdev", 00:16:58.580 "uuid": "94dce24d-9d88-4f22-ad11-7f81b67f5f61", 00:16:58.580 "is_configured": true, 00:16:58.580 "data_offset": 0, 00:16:58.580 "data_size": 65536 00:16:58.580 }, 00:16:58.580 { 00:16:58.580 "name": "BaseBdev2", 00:16:58.580 "uuid": "b5a24f09-5289-4c84-8956-d5e7023787ed", 00:16:58.580 "is_configured": true, 00:16:58.580 "data_offset": 0, 00:16:58.580 "data_size": 65536 00:16:58.580 }, 00:16:58.580 { 00:16:58.580 "name": "BaseBdev3", 00:16:58.580 "uuid": "688166a5-4661-4ad3-9a6c-22025740df26", 00:16:58.580 "is_configured": true, 00:16:58.580 "data_offset": 0, 00:16:58.580 "data_size": 65536 00:16:58.580 } 00:16:58.580 ] 00:16:58.580 }' 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.580 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.148 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:59.148 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:59.148 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:59.148 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:59.148 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:59.148 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:59.148 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:59.148 14:27:37 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:59.148 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.148 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.148 [2024-11-20 14:27:37.853997] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.148 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.148 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:59.148 "name": "Existed_Raid", 00:16:59.148 "aliases": [ 00:16:59.148 "9f19aa3e-e335-46e4-8432-0a2514ef4fec" 00:16:59.148 ], 00:16:59.148 "product_name": "Raid Volume", 00:16:59.148 "block_size": 512, 00:16:59.148 "num_blocks": 131072, 00:16:59.148 "uuid": "9f19aa3e-e335-46e4-8432-0a2514ef4fec", 00:16:59.148 "assigned_rate_limits": { 00:16:59.148 "rw_ios_per_sec": 0, 00:16:59.148 "rw_mbytes_per_sec": 0, 00:16:59.148 "r_mbytes_per_sec": 0, 00:16:59.148 "w_mbytes_per_sec": 0 00:16:59.148 }, 00:16:59.148 "claimed": false, 00:16:59.148 "zoned": false, 00:16:59.148 "supported_io_types": { 00:16:59.148 "read": true, 00:16:59.148 "write": true, 00:16:59.148 "unmap": false, 00:16:59.148 "flush": false, 00:16:59.148 "reset": true, 00:16:59.148 "nvme_admin": false, 00:16:59.148 "nvme_io": false, 00:16:59.148 "nvme_io_md": false, 00:16:59.148 "write_zeroes": true, 00:16:59.148 "zcopy": false, 00:16:59.148 "get_zone_info": false, 00:16:59.148 "zone_management": false, 00:16:59.148 "zone_append": false, 00:16:59.148 "compare": false, 00:16:59.148 "compare_and_write": false, 00:16:59.148 "abort": false, 00:16:59.148 "seek_hole": false, 00:16:59.148 "seek_data": false, 00:16:59.148 "copy": false, 00:16:59.148 "nvme_iov_md": false 00:16:59.148 }, 00:16:59.148 "driver_specific": { 00:16:59.148 "raid": { 00:16:59.148 "uuid": "9f19aa3e-e335-46e4-8432-0a2514ef4fec", 
00:16:59.148 "strip_size_kb": 64, 00:16:59.148 "state": "online", 00:16:59.148 "raid_level": "raid5f", 00:16:59.148 "superblock": false, 00:16:59.148 "num_base_bdevs": 3, 00:16:59.148 "num_base_bdevs_discovered": 3, 00:16:59.148 "num_base_bdevs_operational": 3, 00:16:59.148 "base_bdevs_list": [ 00:16:59.148 { 00:16:59.148 "name": "NewBaseBdev", 00:16:59.148 "uuid": "94dce24d-9d88-4f22-ad11-7f81b67f5f61", 00:16:59.148 "is_configured": true, 00:16:59.148 "data_offset": 0, 00:16:59.148 "data_size": 65536 00:16:59.148 }, 00:16:59.148 { 00:16:59.149 "name": "BaseBdev2", 00:16:59.149 "uuid": "b5a24f09-5289-4c84-8956-d5e7023787ed", 00:16:59.149 "is_configured": true, 00:16:59.149 "data_offset": 0, 00:16:59.149 "data_size": 65536 00:16:59.149 }, 00:16:59.149 { 00:16:59.149 "name": "BaseBdev3", 00:16:59.149 "uuid": "688166a5-4661-4ad3-9a6c-22025740df26", 00:16:59.149 "is_configured": true, 00:16:59.149 "data_offset": 0, 00:16:59.149 "data_size": 65536 00:16:59.149 } 00:16:59.149 ] 00:16:59.149 } 00:16:59.149 } 00:16:59.149 }' 00:16:59.149 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:59.149 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:59.149 BaseBdev2 00:16:59.149 BaseBdev3' 00:16:59.149 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.149 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:59.149 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.149 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:59.149 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:59.149 14:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.149 14:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.149 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.149 14:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:59.149 14:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:59.149 14:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.149 14:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.149 14:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:59.149 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.149 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.149 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.149 14:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:59.149 14:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:59.149 14:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.149 14:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.149 14:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:59.149 14:27:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.149 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.149 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.409 14:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:59.409 14:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:59.409 14:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:59.409 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.409 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.409 [2024-11-20 14:27:38.157827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:59.409 [2024-11-20 14:27:38.157999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.409 [2024-11-20 14:27:38.158126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.409 [2024-11-20 14:27:38.158499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.409 [2024-11-20 14:27:38.158522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:59.409 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.409 14:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80269 00:16:59.409 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80269 ']' 00:16:59.409 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80269 
00:16:59.409 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:59.409 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.409 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80269 00:16:59.409 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.409 killing process with pid 80269 00:16:59.409 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.409 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80269' 00:16:59.409 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80269 00:16:59.409 [2024-11-20 14:27:38.197321] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:59.409 14:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80269 00:16:59.668 [2024-11-20 14:27:38.472610] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:00.605 00:17:00.605 real 0m11.990s 00:17:00.605 user 0m19.886s 00:17:00.605 sys 0m1.700s 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.605 ************************************ 00:17:00.605 END TEST raid5f_state_function_test 00:17:00.605 ************************************ 00:17:00.605 14:27:39 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:17:00.605 14:27:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:00.605 
14:27:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.605 14:27:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.605 ************************************ 00:17:00.605 START TEST raid5f_state_function_test_sb 00:17:00.605 ************************************ 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:00.605 
14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80906 00:17:00.605 Process raid pid: 80906 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80906' 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80906 00:17:00.605 14:27:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80906 ']' 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.605 14:27:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.864 [2024-11-20 14:27:39.694715] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:17:00.864 [2024-11-20 14:27:39.694922] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.124 [2024-11-20 14:27:39.891761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.124 [2024-11-20 14:27:40.045238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.382 [2024-11-20 14:27:40.257236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.382 [2024-11-20 14:27:40.257297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:01.950 14:27:40 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.950 [2024-11-20 14:27:40.763470] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:01.950 [2024-11-20 14:27:40.763561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:01.950 [2024-11-20 14:27:40.763580] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.950 [2024-11-20 14:27:40.763597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.950 [2024-11-20 14:27:40.763608] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:01.950 [2024-11-20 14:27:40.763623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.950 "name": "Existed_Raid", 00:17:01.950 "uuid": "a55c48c8-e9f4-4f3b-8fa4-a49aefdc84c3", 00:17:01.950 "strip_size_kb": 64, 00:17:01.950 "state": "configuring", 00:17:01.950 "raid_level": "raid5f", 00:17:01.950 "superblock": true, 00:17:01.950 "num_base_bdevs": 3, 00:17:01.950 "num_base_bdevs_discovered": 0, 00:17:01.950 "num_base_bdevs_operational": 3, 00:17:01.950 "base_bdevs_list": [ 00:17:01.950 { 00:17:01.950 "name": "BaseBdev1", 00:17:01.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.950 "is_configured": false, 00:17:01.950 "data_offset": 0, 00:17:01.950 "data_size": 0 00:17:01.950 }, 00:17:01.950 { 00:17:01.950 "name": "BaseBdev2", 00:17:01.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.950 "is_configured": false, 00:17:01.950 
"data_offset": 0, 00:17:01.950 "data_size": 0 00:17:01.950 }, 00:17:01.950 { 00:17:01.950 "name": "BaseBdev3", 00:17:01.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.950 "is_configured": false, 00:17:01.950 "data_offset": 0, 00:17:01.950 "data_size": 0 00:17:01.950 } 00:17:01.950 ] 00:17:01.950 }' 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.950 14:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.516 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:02.516 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.516 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.516 [2024-11-20 14:27:41.235462] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:02.517 [2024-11-20 14:27:41.235514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.517 [2024-11-20 14:27:41.243484] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:02.517 [2024-11-20 14:27:41.243539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:02.517 [2024-11-20 14:27:41.243554] 
bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:02.517 [2024-11-20 14:27:41.243569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:02.517 [2024-11-20 14:27:41.243579] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:02.517 [2024-11-20 14:27:41.243593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.517 [2024-11-20 14:27:41.289346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.517 BaseBdev1 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.517 [ 00:17:02.517 { 00:17:02.517 "name": "BaseBdev1", 00:17:02.517 "aliases": [ 00:17:02.517 "6470621d-d9cf-4bf4-8774-6de0e8cc3ed7" 00:17:02.517 ], 00:17:02.517 "product_name": "Malloc disk", 00:17:02.517 "block_size": 512, 00:17:02.517 "num_blocks": 65536, 00:17:02.517 "uuid": "6470621d-d9cf-4bf4-8774-6de0e8cc3ed7", 00:17:02.517 "assigned_rate_limits": { 00:17:02.517 "rw_ios_per_sec": 0, 00:17:02.517 "rw_mbytes_per_sec": 0, 00:17:02.517 "r_mbytes_per_sec": 0, 00:17:02.517 "w_mbytes_per_sec": 0 00:17:02.517 }, 00:17:02.517 "claimed": true, 00:17:02.517 "claim_type": "exclusive_write", 00:17:02.517 "zoned": false, 00:17:02.517 "supported_io_types": { 00:17:02.517 "read": true, 00:17:02.517 "write": true, 00:17:02.517 "unmap": true, 00:17:02.517 "flush": true, 00:17:02.517 "reset": true, 00:17:02.517 "nvme_admin": false, 00:17:02.517 "nvme_io": false, 00:17:02.517 "nvme_io_md": false, 00:17:02.517 "write_zeroes": true, 00:17:02.517 "zcopy": true, 00:17:02.517 "get_zone_info": false, 00:17:02.517 "zone_management": false, 00:17:02.517 "zone_append": false, 00:17:02.517 "compare": false, 00:17:02.517 "compare_and_write": false, 00:17:02.517 "abort": true, 00:17:02.517 "seek_hole": false, 00:17:02.517 
"seek_data": false, 00:17:02.517 "copy": true, 00:17:02.517 "nvme_iov_md": false 00:17:02.517 }, 00:17:02.517 "memory_domains": [ 00:17:02.517 { 00:17:02.517 "dma_device_id": "system", 00:17:02.517 "dma_device_type": 1 00:17:02.517 }, 00:17:02.517 { 00:17:02.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.517 "dma_device_type": 2 00:17:02.517 } 00:17:02.517 ], 00:17:02.517 "driver_specific": {} 00:17:02.517 } 00:17:02.517 ] 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.517 "name": "Existed_Raid", 00:17:02.517 "uuid": "926d6913-c446-4dd1-a054-97838ff358f3", 00:17:02.517 "strip_size_kb": 64, 00:17:02.517 "state": "configuring", 00:17:02.517 "raid_level": "raid5f", 00:17:02.517 "superblock": true, 00:17:02.517 "num_base_bdevs": 3, 00:17:02.517 "num_base_bdevs_discovered": 1, 00:17:02.517 "num_base_bdevs_operational": 3, 00:17:02.517 "base_bdevs_list": [ 00:17:02.517 { 00:17:02.517 "name": "BaseBdev1", 00:17:02.517 "uuid": "6470621d-d9cf-4bf4-8774-6de0e8cc3ed7", 00:17:02.517 "is_configured": true, 00:17:02.517 "data_offset": 2048, 00:17:02.517 "data_size": 63488 00:17:02.517 }, 00:17:02.517 { 00:17:02.517 "name": "BaseBdev2", 00:17:02.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.517 "is_configured": false, 00:17:02.517 "data_offset": 0, 00:17:02.517 "data_size": 0 00:17:02.517 }, 00:17:02.517 { 00:17:02.517 "name": "BaseBdev3", 00:17:02.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.517 "is_configured": false, 00:17:02.517 "data_offset": 0, 00:17:02.517 "data_size": 0 00:17:02.517 } 00:17:02.517 ] 00:17:02.517 }' 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.517 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.086 [2024-11-20 14:27:41.789541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:03.086 [2024-11-20 14:27:41.789610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.086 [2024-11-20 14:27:41.797608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.086 [2024-11-20 14:27:41.800154] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:03.086 [2024-11-20 14:27:41.800206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:03.086 [2024-11-20 14:27:41.800223] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:03.086 [2024-11-20 14:27:41.800238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.086 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.086 "name": 
"Existed_Raid", 00:17:03.086 "uuid": "459131b5-bb33-40e9-ba28-45a30bdfa2ee", 00:17:03.086 "strip_size_kb": 64, 00:17:03.087 "state": "configuring", 00:17:03.087 "raid_level": "raid5f", 00:17:03.087 "superblock": true, 00:17:03.087 "num_base_bdevs": 3, 00:17:03.087 "num_base_bdevs_discovered": 1, 00:17:03.087 "num_base_bdevs_operational": 3, 00:17:03.087 "base_bdevs_list": [ 00:17:03.087 { 00:17:03.087 "name": "BaseBdev1", 00:17:03.087 "uuid": "6470621d-d9cf-4bf4-8774-6de0e8cc3ed7", 00:17:03.087 "is_configured": true, 00:17:03.087 "data_offset": 2048, 00:17:03.087 "data_size": 63488 00:17:03.087 }, 00:17:03.087 { 00:17:03.087 "name": "BaseBdev2", 00:17:03.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.087 "is_configured": false, 00:17:03.087 "data_offset": 0, 00:17:03.087 "data_size": 0 00:17:03.087 }, 00:17:03.087 { 00:17:03.087 "name": "BaseBdev3", 00:17:03.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.087 "is_configured": false, 00:17:03.087 "data_offset": 0, 00:17:03.087 "data_size": 0 00:17:03.087 } 00:17:03.087 ] 00:17:03.087 }' 00:17:03.087 14:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.087 14:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.346 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:03.346 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.346 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.604 [2024-11-20 14:27:42.341644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.604 BaseBdev2 00:17:03.604 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.604 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:17:03.604 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:03.604 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:03.604 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:03.604 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:03.604 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:03.604 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:03.604 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.604 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.604 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.604 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:03.604 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.604 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.604 [ 00:17:03.604 { 00:17:03.604 "name": "BaseBdev2", 00:17:03.604 "aliases": [ 00:17:03.604 "bef59553-9c8c-4754-a24b-afb8b8a6922d" 00:17:03.604 ], 00:17:03.604 "product_name": "Malloc disk", 00:17:03.604 "block_size": 512, 00:17:03.604 "num_blocks": 65536, 00:17:03.604 "uuid": "bef59553-9c8c-4754-a24b-afb8b8a6922d", 00:17:03.604 "assigned_rate_limits": { 00:17:03.604 "rw_ios_per_sec": 0, 00:17:03.604 "rw_mbytes_per_sec": 0, 00:17:03.604 "r_mbytes_per_sec": 0, 00:17:03.604 "w_mbytes_per_sec": 0 00:17:03.604 }, 00:17:03.604 "claimed": true, 
00:17:03.604 "claim_type": "exclusive_write", 00:17:03.604 "zoned": false, 00:17:03.605 "supported_io_types": { 00:17:03.605 "read": true, 00:17:03.605 "write": true, 00:17:03.605 "unmap": true, 00:17:03.605 "flush": true, 00:17:03.605 "reset": true, 00:17:03.605 "nvme_admin": false, 00:17:03.605 "nvme_io": false, 00:17:03.605 "nvme_io_md": false, 00:17:03.605 "write_zeroes": true, 00:17:03.605 "zcopy": true, 00:17:03.605 "get_zone_info": false, 00:17:03.605 "zone_management": false, 00:17:03.605 "zone_append": false, 00:17:03.605 "compare": false, 00:17:03.605 "compare_and_write": false, 00:17:03.605 "abort": true, 00:17:03.605 "seek_hole": false, 00:17:03.605 "seek_data": false, 00:17:03.605 "copy": true, 00:17:03.605 "nvme_iov_md": false 00:17:03.605 }, 00:17:03.605 "memory_domains": [ 00:17:03.605 { 00:17:03.605 "dma_device_id": "system", 00:17:03.605 "dma_device_type": 1 00:17:03.605 }, 00:17:03.605 { 00:17:03.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.605 "dma_device_type": 2 00:17:03.605 } 00:17:03.605 ], 00:17:03.605 "driver_specific": {} 00:17:03.605 } 00:17:03.605 ] 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:03.605 14:27:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.605 "name": "Existed_Raid", 00:17:03.605 "uuid": "459131b5-bb33-40e9-ba28-45a30bdfa2ee", 00:17:03.605 "strip_size_kb": 64, 00:17:03.605 "state": "configuring", 00:17:03.605 "raid_level": "raid5f", 00:17:03.605 "superblock": true, 00:17:03.605 "num_base_bdevs": 3, 00:17:03.605 "num_base_bdevs_discovered": 2, 00:17:03.605 "num_base_bdevs_operational": 3, 00:17:03.605 "base_bdevs_list": [ 00:17:03.605 { 00:17:03.605 "name": "BaseBdev1", 00:17:03.605 "uuid": "6470621d-d9cf-4bf4-8774-6de0e8cc3ed7", 
00:17:03.605 "is_configured": true, 00:17:03.605 "data_offset": 2048, 00:17:03.605 "data_size": 63488 00:17:03.605 }, 00:17:03.605 { 00:17:03.605 "name": "BaseBdev2", 00:17:03.605 "uuid": "bef59553-9c8c-4754-a24b-afb8b8a6922d", 00:17:03.605 "is_configured": true, 00:17:03.605 "data_offset": 2048, 00:17:03.605 "data_size": 63488 00:17:03.605 }, 00:17:03.605 { 00:17:03.605 "name": "BaseBdev3", 00:17:03.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.605 "is_configured": false, 00:17:03.605 "data_offset": 0, 00:17:03.605 "data_size": 0 00:17:03.605 } 00:17:03.605 ] 00:17:03.605 }' 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.605 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.173 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:04.173 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.173 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.173 [2024-11-20 14:27:42.957729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:04.173 [2024-11-20 14:27:42.958081] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:04.173 [2024-11-20 14:27:42.958111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:04.173 BaseBdev3 00:17:04.173 [2024-11-20 14:27:42.958437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:04.173 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.173 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:04.173 14:27:42 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:04.173 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:04.173 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:04.173 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:04.173 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:04.173 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:04.173 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.173 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.173 [2024-11-20 14:27:42.963759] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:04.173 [2024-11-20 14:27:42.963793] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:04.173 [2024-11-20 14:27:42.964032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.173 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.173 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.174 [ 00:17:04.174 { 00:17:04.174 "name": "BaseBdev3", 00:17:04.174 "aliases": [ 00:17:04.174 "70ab15e1-6c9f-41d7-9962-757c01961e76" 00:17:04.174 ], 00:17:04.174 "product_name": "Malloc disk", 00:17:04.174 "block_size": 512, 00:17:04.174 
"num_blocks": 65536, 00:17:04.174 "uuid": "70ab15e1-6c9f-41d7-9962-757c01961e76", 00:17:04.174 "assigned_rate_limits": { 00:17:04.174 "rw_ios_per_sec": 0, 00:17:04.174 "rw_mbytes_per_sec": 0, 00:17:04.174 "r_mbytes_per_sec": 0, 00:17:04.174 "w_mbytes_per_sec": 0 00:17:04.174 }, 00:17:04.174 "claimed": true, 00:17:04.174 "claim_type": "exclusive_write", 00:17:04.174 "zoned": false, 00:17:04.174 "supported_io_types": { 00:17:04.174 "read": true, 00:17:04.174 "write": true, 00:17:04.174 "unmap": true, 00:17:04.174 "flush": true, 00:17:04.174 "reset": true, 00:17:04.174 "nvme_admin": false, 00:17:04.174 "nvme_io": false, 00:17:04.174 "nvme_io_md": false, 00:17:04.174 "write_zeroes": true, 00:17:04.174 "zcopy": true, 00:17:04.174 "get_zone_info": false, 00:17:04.174 "zone_management": false, 00:17:04.174 "zone_append": false, 00:17:04.174 "compare": false, 00:17:04.174 "compare_and_write": false, 00:17:04.174 "abort": true, 00:17:04.174 "seek_hole": false, 00:17:04.174 "seek_data": false, 00:17:04.174 "copy": true, 00:17:04.174 "nvme_iov_md": false 00:17:04.174 }, 00:17:04.174 "memory_domains": [ 00:17:04.174 { 00:17:04.174 "dma_device_id": "system", 00:17:04.174 "dma_device_type": 1 00:17:04.174 }, 00:17:04.174 { 00:17:04.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.174 "dma_device_type": 2 00:17:04.174 } 00:17:04.174 ], 00:17:04.174 "driver_specific": {} 00:17:04.174 } 00:17:04.174 ] 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.174 14:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.174 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.174 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.174 "name": "Existed_Raid", 00:17:04.174 "uuid": "459131b5-bb33-40e9-ba28-45a30bdfa2ee", 00:17:04.174 "strip_size_kb": 64, 00:17:04.174 "state": "online", 00:17:04.174 "raid_level": "raid5f", 00:17:04.174 "superblock": true, 
00:17:04.174 "num_base_bdevs": 3, 00:17:04.174 "num_base_bdevs_discovered": 3, 00:17:04.174 "num_base_bdevs_operational": 3, 00:17:04.174 "base_bdevs_list": [ 00:17:04.174 { 00:17:04.174 "name": "BaseBdev1", 00:17:04.174 "uuid": "6470621d-d9cf-4bf4-8774-6de0e8cc3ed7", 00:17:04.174 "is_configured": true, 00:17:04.174 "data_offset": 2048, 00:17:04.174 "data_size": 63488 00:17:04.174 }, 00:17:04.174 { 00:17:04.174 "name": "BaseBdev2", 00:17:04.174 "uuid": "bef59553-9c8c-4754-a24b-afb8b8a6922d", 00:17:04.174 "is_configured": true, 00:17:04.174 "data_offset": 2048, 00:17:04.174 "data_size": 63488 00:17:04.174 }, 00:17:04.174 { 00:17:04.174 "name": "BaseBdev3", 00:17:04.174 "uuid": "70ab15e1-6c9f-41d7-9962-757c01961e76", 00:17:04.174 "is_configured": true, 00:17:04.174 "data_offset": 2048, 00:17:04.174 "data_size": 63488 00:17:04.174 } 00:17:04.174 ] 00:17:04.174 }' 00:17:04.174 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.174 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:04.741 [2024-11-20 14:27:43.493967] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:04.741 "name": "Existed_Raid", 00:17:04.741 "aliases": [ 00:17:04.741 "459131b5-bb33-40e9-ba28-45a30bdfa2ee" 00:17:04.741 ], 00:17:04.741 "product_name": "Raid Volume", 00:17:04.741 "block_size": 512, 00:17:04.741 "num_blocks": 126976, 00:17:04.741 "uuid": "459131b5-bb33-40e9-ba28-45a30bdfa2ee", 00:17:04.741 "assigned_rate_limits": { 00:17:04.741 "rw_ios_per_sec": 0, 00:17:04.741 "rw_mbytes_per_sec": 0, 00:17:04.741 "r_mbytes_per_sec": 0, 00:17:04.741 "w_mbytes_per_sec": 0 00:17:04.741 }, 00:17:04.741 "claimed": false, 00:17:04.741 "zoned": false, 00:17:04.741 "supported_io_types": { 00:17:04.741 "read": true, 00:17:04.741 "write": true, 00:17:04.741 "unmap": false, 00:17:04.741 "flush": false, 00:17:04.741 "reset": true, 00:17:04.741 "nvme_admin": false, 00:17:04.741 "nvme_io": false, 00:17:04.741 "nvme_io_md": false, 00:17:04.741 "write_zeroes": true, 00:17:04.741 "zcopy": false, 00:17:04.741 "get_zone_info": false, 00:17:04.741 "zone_management": false, 00:17:04.741 "zone_append": false, 00:17:04.741 "compare": false, 00:17:04.741 "compare_and_write": false, 00:17:04.741 "abort": false, 00:17:04.741 "seek_hole": false, 00:17:04.741 "seek_data": false, 00:17:04.741 "copy": false, 00:17:04.741 "nvme_iov_md": false 00:17:04.741 }, 00:17:04.741 "driver_specific": { 00:17:04.741 "raid": { 00:17:04.741 "uuid": "459131b5-bb33-40e9-ba28-45a30bdfa2ee", 00:17:04.741 
"strip_size_kb": 64, 00:17:04.741 "state": "online", 00:17:04.741 "raid_level": "raid5f", 00:17:04.741 "superblock": true, 00:17:04.741 "num_base_bdevs": 3, 00:17:04.741 "num_base_bdevs_discovered": 3, 00:17:04.741 "num_base_bdevs_operational": 3, 00:17:04.741 "base_bdevs_list": [ 00:17:04.741 { 00:17:04.741 "name": "BaseBdev1", 00:17:04.741 "uuid": "6470621d-d9cf-4bf4-8774-6de0e8cc3ed7", 00:17:04.741 "is_configured": true, 00:17:04.741 "data_offset": 2048, 00:17:04.741 "data_size": 63488 00:17:04.741 }, 00:17:04.741 { 00:17:04.741 "name": "BaseBdev2", 00:17:04.741 "uuid": "bef59553-9c8c-4754-a24b-afb8b8a6922d", 00:17:04.741 "is_configured": true, 00:17:04.741 "data_offset": 2048, 00:17:04.741 "data_size": 63488 00:17:04.741 }, 00:17:04.741 { 00:17:04.741 "name": "BaseBdev3", 00:17:04.741 "uuid": "70ab15e1-6c9f-41d7-9962-757c01961e76", 00:17:04.741 "is_configured": true, 00:17:04.741 "data_offset": 2048, 00:17:04.741 "data_size": 63488 00:17:04.741 } 00:17:04.741 ] 00:17:04.741 } 00:17:04.741 } 00:17:04.741 }' 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:04.741 BaseBdev2 00:17:04.741 BaseBdev3' 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.741 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.742 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.000 14:27:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.000 [2024-11-20 14:27:43.809904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.000 
14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.000 "name": "Existed_Raid", 00:17:05.000 "uuid": "459131b5-bb33-40e9-ba28-45a30bdfa2ee", 00:17:05.000 "strip_size_kb": 64, 00:17:05.000 "state": "online", 00:17:05.000 "raid_level": "raid5f", 00:17:05.000 "superblock": true, 00:17:05.000 "num_base_bdevs": 3, 00:17:05.000 "num_base_bdevs_discovered": 2, 00:17:05.000 "num_base_bdevs_operational": 2, 00:17:05.000 
"base_bdevs_list": [ 00:17:05.000 { 00:17:05.000 "name": null, 00:17:05.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.000 "is_configured": false, 00:17:05.000 "data_offset": 0, 00:17:05.000 "data_size": 63488 00:17:05.000 }, 00:17:05.000 { 00:17:05.000 "name": "BaseBdev2", 00:17:05.000 "uuid": "bef59553-9c8c-4754-a24b-afb8b8a6922d", 00:17:05.000 "is_configured": true, 00:17:05.000 "data_offset": 2048, 00:17:05.000 "data_size": 63488 00:17:05.000 }, 00:17:05.000 { 00:17:05.000 "name": "BaseBdev3", 00:17:05.000 "uuid": "70ab15e1-6c9f-41d7-9962-757c01961e76", 00:17:05.000 "is_configured": true, 00:17:05.000 "data_offset": 2048, 00:17:05.000 "data_size": 63488 00:17:05.000 } 00:17:05.000 ] 00:17:05.000 }' 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.000 14:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.566 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:05.566 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:05.566 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:05.566 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.567 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.567 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.567 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.567 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:05.567 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:05.567 14:27:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:05.567 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.567 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.567 [2024-11-20 14:27:44.457776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:05.567 [2024-11-20 14:27:44.457971] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.567 [2024-11-20 14:27:44.542922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.567 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.567 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:05.567 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:05.825 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.825 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:05.825 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:05.826 14:27:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.826 [2024-11-20 14:27:44.607041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:05.826 [2024-11-20 14:27:44.607114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.826 BaseBdev2 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:05.826 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.085 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.085 [ 00:17:06.085 { 00:17:06.085 "name": "BaseBdev2", 
00:17:06.085 "aliases": [ 00:17:06.085 "01909527-9b05-4ba4-beaa-dce542f8687c" 00:17:06.085 ], 00:17:06.085 "product_name": "Malloc disk", 00:17:06.085 "block_size": 512, 00:17:06.085 "num_blocks": 65536, 00:17:06.085 "uuid": "01909527-9b05-4ba4-beaa-dce542f8687c", 00:17:06.085 "assigned_rate_limits": { 00:17:06.085 "rw_ios_per_sec": 0, 00:17:06.085 "rw_mbytes_per_sec": 0, 00:17:06.085 "r_mbytes_per_sec": 0, 00:17:06.085 "w_mbytes_per_sec": 0 00:17:06.085 }, 00:17:06.085 "claimed": false, 00:17:06.085 "zoned": false, 00:17:06.085 "supported_io_types": { 00:17:06.085 "read": true, 00:17:06.085 "write": true, 00:17:06.085 "unmap": true, 00:17:06.085 "flush": true, 00:17:06.085 "reset": true, 00:17:06.085 "nvme_admin": false, 00:17:06.085 "nvme_io": false, 00:17:06.085 "nvme_io_md": false, 00:17:06.085 "write_zeroes": true, 00:17:06.085 "zcopy": true, 00:17:06.085 "get_zone_info": false, 00:17:06.085 "zone_management": false, 00:17:06.085 "zone_append": false, 00:17:06.085 "compare": false, 00:17:06.085 "compare_and_write": false, 00:17:06.085 "abort": true, 00:17:06.085 "seek_hole": false, 00:17:06.085 "seek_data": false, 00:17:06.085 "copy": true, 00:17:06.085 "nvme_iov_md": false 00:17:06.085 }, 00:17:06.085 "memory_domains": [ 00:17:06.085 { 00:17:06.085 "dma_device_id": "system", 00:17:06.085 "dma_device_type": 1 00:17:06.085 }, 00:17:06.085 { 00:17:06.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.085 "dma_device_type": 2 00:17:06.085 } 00:17:06.085 ], 00:17:06.085 "driver_specific": {} 00:17:06.085 } 00:17:06.085 ] 00:17:06.085 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.085 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:06.085 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:06.085 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 
00:17:06.085 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:06.085 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.085 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.085 BaseBdev3 00:17:06.085 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.085 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:06.085 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:06.085 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:06.085 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:06.085 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:06.085 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:06.086 [ 00:17:06.086 { 00:17:06.086 "name": "BaseBdev3", 00:17:06.086 "aliases": [ 00:17:06.086 "168a07c0-6792-4f8f-9e90-cc34f8b9c2d0" 00:17:06.086 ], 00:17:06.086 "product_name": "Malloc disk", 00:17:06.086 "block_size": 512, 00:17:06.086 "num_blocks": 65536, 00:17:06.086 "uuid": "168a07c0-6792-4f8f-9e90-cc34f8b9c2d0", 00:17:06.086 "assigned_rate_limits": { 00:17:06.086 "rw_ios_per_sec": 0, 00:17:06.086 "rw_mbytes_per_sec": 0, 00:17:06.086 "r_mbytes_per_sec": 0, 00:17:06.086 "w_mbytes_per_sec": 0 00:17:06.086 }, 00:17:06.086 "claimed": false, 00:17:06.086 "zoned": false, 00:17:06.086 "supported_io_types": { 00:17:06.086 "read": true, 00:17:06.086 "write": true, 00:17:06.086 "unmap": true, 00:17:06.086 "flush": true, 00:17:06.086 "reset": true, 00:17:06.086 "nvme_admin": false, 00:17:06.086 "nvme_io": false, 00:17:06.086 "nvme_io_md": false, 00:17:06.086 "write_zeroes": true, 00:17:06.086 "zcopy": true, 00:17:06.086 "get_zone_info": false, 00:17:06.086 "zone_management": false, 00:17:06.086 "zone_append": false, 00:17:06.086 "compare": false, 00:17:06.086 "compare_and_write": false, 00:17:06.086 "abort": true, 00:17:06.086 "seek_hole": false, 00:17:06.086 "seek_data": false, 00:17:06.086 "copy": true, 00:17:06.086 "nvme_iov_md": false 00:17:06.086 }, 00:17:06.086 "memory_domains": [ 00:17:06.086 { 00:17:06.086 "dma_device_id": "system", 00:17:06.086 "dma_device_type": 1 00:17:06.086 }, 00:17:06.086 { 00:17:06.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.086 "dma_device_type": 2 00:17:06.086 } 00:17:06.086 ], 00:17:06.086 "driver_specific": {} 00:17:06.086 } 00:17:06.086 ] 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:06.086 
14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.086 [2024-11-20 14:27:44.902333] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:06.086 [2024-11-20 14:27:44.902418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:06.086 [2024-11-20 14:27:44.902452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.086 [2024-11-20 14:27:44.905174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.086 "name": "Existed_Raid", 00:17:06.086 "uuid": "5adb07b3-b729-4705-9057-91dfca0c8f5c", 00:17:06.086 "strip_size_kb": 64, 00:17:06.086 "state": "configuring", 00:17:06.086 "raid_level": "raid5f", 00:17:06.086 "superblock": true, 00:17:06.086 "num_base_bdevs": 3, 00:17:06.086 "num_base_bdevs_discovered": 2, 00:17:06.086 "num_base_bdevs_operational": 3, 00:17:06.086 "base_bdevs_list": [ 00:17:06.086 { 00:17:06.086 "name": "BaseBdev1", 00:17:06.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.086 "is_configured": false, 00:17:06.086 "data_offset": 0, 00:17:06.086 "data_size": 0 00:17:06.086 }, 00:17:06.086 { 00:17:06.086 "name": "BaseBdev2", 00:17:06.086 "uuid": "01909527-9b05-4ba4-beaa-dce542f8687c", 00:17:06.086 "is_configured": true, 00:17:06.086 "data_offset": 2048, 00:17:06.086 "data_size": 63488 00:17:06.086 }, 00:17:06.086 { 00:17:06.086 "name": "BaseBdev3", 00:17:06.086 "uuid": 
"168a07c0-6792-4f8f-9e90-cc34f8b9c2d0", 00:17:06.086 "is_configured": true, 00:17:06.086 "data_offset": 2048, 00:17:06.086 "data_size": 63488 00:17:06.086 } 00:17:06.086 ] 00:17:06.086 }' 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.086 14:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.653 [2024-11-20 14:27:45.434501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.653 14:27:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.653 14:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.653 "name": "Existed_Raid", 00:17:06.653 "uuid": "5adb07b3-b729-4705-9057-91dfca0c8f5c", 00:17:06.653 "strip_size_kb": 64, 00:17:06.653 "state": "configuring", 00:17:06.653 "raid_level": "raid5f", 00:17:06.653 "superblock": true, 00:17:06.653 "num_base_bdevs": 3, 00:17:06.653 "num_base_bdevs_discovered": 1, 00:17:06.653 "num_base_bdevs_operational": 3, 00:17:06.654 "base_bdevs_list": [ 00:17:06.654 { 00:17:06.654 "name": "BaseBdev1", 00:17:06.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.654 "is_configured": false, 00:17:06.654 "data_offset": 0, 00:17:06.654 "data_size": 0 00:17:06.654 }, 00:17:06.654 { 00:17:06.654 "name": null, 00:17:06.654 "uuid": "01909527-9b05-4ba4-beaa-dce542f8687c", 00:17:06.654 "is_configured": false, 00:17:06.654 "data_offset": 0, 00:17:06.654 "data_size": 63488 00:17:06.654 }, 00:17:06.654 { 00:17:06.654 "name": "BaseBdev3", 00:17:06.654 "uuid": "168a07c0-6792-4f8f-9e90-cc34f8b9c2d0", 00:17:06.654 "is_configured": true, 00:17:06.654 "data_offset": 2048, 00:17:06.654 "data_size": 63488 00:17:06.654 } 00:17:06.654 ] 
00:17:06.654 }' 00:17:06.654 14:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.654 14:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.221 14:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:07.221 14:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.221 14:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.221 14:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.221 [2024-11-20 14:27:46.075537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.221 BaseBdev1 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.221 [ 00:17:07.221 { 00:17:07.221 "name": "BaseBdev1", 00:17:07.221 "aliases": [ 00:17:07.221 "7f55404a-82c2-4e02-9c22-f03a3a564ea5" 00:17:07.221 ], 00:17:07.221 "product_name": "Malloc disk", 00:17:07.221 "block_size": 512, 00:17:07.221 "num_blocks": 65536, 00:17:07.221 "uuid": "7f55404a-82c2-4e02-9c22-f03a3a564ea5", 00:17:07.221 "assigned_rate_limits": { 00:17:07.221 "rw_ios_per_sec": 0, 00:17:07.221 "rw_mbytes_per_sec": 0, 00:17:07.221 "r_mbytes_per_sec": 0, 00:17:07.221 "w_mbytes_per_sec": 0 00:17:07.221 }, 00:17:07.221 "claimed": true, 00:17:07.221 "claim_type": "exclusive_write", 00:17:07.221 "zoned": false, 00:17:07.221 "supported_io_types": { 00:17:07.221 "read": true, 00:17:07.221 "write": true, 00:17:07.221 "unmap": true, 00:17:07.221 "flush": true, 00:17:07.221 "reset": true, 00:17:07.221 "nvme_admin": false, 00:17:07.221 "nvme_io": false, 00:17:07.221 
"nvme_io_md": false, 00:17:07.221 "write_zeroes": true, 00:17:07.221 "zcopy": true, 00:17:07.221 "get_zone_info": false, 00:17:07.221 "zone_management": false, 00:17:07.221 "zone_append": false, 00:17:07.221 "compare": false, 00:17:07.221 "compare_and_write": false, 00:17:07.221 "abort": true, 00:17:07.221 "seek_hole": false, 00:17:07.221 "seek_data": false, 00:17:07.221 "copy": true, 00:17:07.221 "nvme_iov_md": false 00:17:07.221 }, 00:17:07.221 "memory_domains": [ 00:17:07.221 { 00:17:07.221 "dma_device_id": "system", 00:17:07.221 "dma_device_type": 1 00:17:07.221 }, 00:17:07.221 { 00:17:07.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.221 "dma_device_type": 2 00:17:07.221 } 00:17:07.221 ], 00:17:07.221 "driver_specific": {} 00:17:07.221 } 00:17:07.221 ] 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.221 
14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.221 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.222 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.222 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.222 "name": "Existed_Raid", 00:17:07.222 "uuid": "5adb07b3-b729-4705-9057-91dfca0c8f5c", 00:17:07.222 "strip_size_kb": 64, 00:17:07.222 "state": "configuring", 00:17:07.222 "raid_level": "raid5f", 00:17:07.222 "superblock": true, 00:17:07.222 "num_base_bdevs": 3, 00:17:07.222 "num_base_bdevs_discovered": 2, 00:17:07.222 "num_base_bdevs_operational": 3, 00:17:07.222 "base_bdevs_list": [ 00:17:07.222 { 00:17:07.222 "name": "BaseBdev1", 00:17:07.222 "uuid": "7f55404a-82c2-4e02-9c22-f03a3a564ea5", 00:17:07.222 "is_configured": true, 00:17:07.222 "data_offset": 2048, 00:17:07.222 "data_size": 63488 00:17:07.222 }, 00:17:07.222 { 00:17:07.222 "name": null, 00:17:07.222 "uuid": "01909527-9b05-4ba4-beaa-dce542f8687c", 00:17:07.222 "is_configured": false, 00:17:07.222 "data_offset": 0, 00:17:07.222 "data_size": 63488 00:17:07.222 }, 00:17:07.222 { 00:17:07.222 "name": "BaseBdev3", 00:17:07.222 "uuid": "168a07c0-6792-4f8f-9e90-cc34f8b9c2d0", 00:17:07.222 "is_configured": true, 00:17:07.222 "data_offset": 2048, 00:17:07.222 "data_size": 63488 00:17:07.222 } 
00:17:07.222 ] 00:17:07.222 }' 00:17:07.222 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.222 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.789 [2024-11-20 14:27:46.671859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.789 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.789 "name": "Existed_Raid", 00:17:07.789 "uuid": "5adb07b3-b729-4705-9057-91dfca0c8f5c", 00:17:07.789 "strip_size_kb": 64, 00:17:07.789 "state": "configuring", 00:17:07.789 "raid_level": "raid5f", 00:17:07.789 "superblock": true, 00:17:07.789 "num_base_bdevs": 3, 00:17:07.789 "num_base_bdevs_discovered": 1, 00:17:07.789 "num_base_bdevs_operational": 3, 00:17:07.789 "base_bdevs_list": [ 00:17:07.789 { 00:17:07.789 "name": "BaseBdev1", 00:17:07.789 "uuid": "7f55404a-82c2-4e02-9c22-f03a3a564ea5", 00:17:07.789 "is_configured": true, 
00:17:07.789 "data_offset": 2048, 00:17:07.789 "data_size": 63488 00:17:07.789 }, 00:17:07.789 { 00:17:07.789 "name": null, 00:17:07.789 "uuid": "01909527-9b05-4ba4-beaa-dce542f8687c", 00:17:07.789 "is_configured": false, 00:17:07.789 "data_offset": 0, 00:17:07.789 "data_size": 63488 00:17:07.789 }, 00:17:07.789 { 00:17:07.789 "name": null, 00:17:07.789 "uuid": "168a07c0-6792-4f8f-9e90-cc34f8b9c2d0", 00:17:07.790 "is_configured": false, 00:17:07.790 "data_offset": 0, 00:17:07.790 "data_size": 63488 00:17:07.790 } 00:17:07.790 ] 00:17:07.790 }' 00:17:07.790 14:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.790 14:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.356 [2024-11-20 14:27:47.256046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:08.356 14:27:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:08.356 "name": "Existed_Raid", 00:17:08.356 "uuid": "5adb07b3-b729-4705-9057-91dfca0c8f5c", 00:17:08.356 "strip_size_kb": 64, 00:17:08.356 "state": "configuring", 00:17:08.356 "raid_level": "raid5f", 00:17:08.356 "superblock": true, 00:17:08.356 "num_base_bdevs": 3, 00:17:08.356 "num_base_bdevs_discovered": 2, 00:17:08.356 "num_base_bdevs_operational": 3, 00:17:08.356 "base_bdevs_list": [ 00:17:08.356 { 00:17:08.356 "name": "BaseBdev1", 00:17:08.356 "uuid": "7f55404a-82c2-4e02-9c22-f03a3a564ea5", 00:17:08.356 "is_configured": true, 00:17:08.356 "data_offset": 2048, 00:17:08.356 "data_size": 63488 00:17:08.356 }, 00:17:08.356 { 00:17:08.356 "name": null, 00:17:08.356 "uuid": "01909527-9b05-4ba4-beaa-dce542f8687c", 00:17:08.356 "is_configured": false, 00:17:08.356 "data_offset": 0, 00:17:08.356 "data_size": 63488 00:17:08.356 }, 00:17:08.356 { 00:17:08.356 "name": "BaseBdev3", 00:17:08.356 "uuid": "168a07c0-6792-4f8f-9e90-cc34f8b9c2d0", 00:17:08.356 "is_configured": true, 00:17:08.356 "data_offset": 2048, 00:17:08.356 "data_size": 63488 00:17:08.356 } 00:17:08.356 ] 00:17:08.356 }' 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.356 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.996 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.996 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.996 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.996 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:08.996 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.996 14:27:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:08.996 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:08.996 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.996 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.996 [2024-11-20 14:27:47.828281] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.996 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.996 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:08.996 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.996 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.996 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.997 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.997 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.997 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.997 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.997 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.997 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.997 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.997 14:27:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.997 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.997 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.997 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.997 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.997 "name": "Existed_Raid", 00:17:08.997 "uuid": "5adb07b3-b729-4705-9057-91dfca0c8f5c", 00:17:08.997 "strip_size_kb": 64, 00:17:08.997 "state": "configuring", 00:17:08.997 "raid_level": "raid5f", 00:17:08.997 "superblock": true, 00:17:08.997 "num_base_bdevs": 3, 00:17:08.997 "num_base_bdevs_discovered": 1, 00:17:08.997 "num_base_bdevs_operational": 3, 00:17:08.997 "base_bdevs_list": [ 00:17:08.997 { 00:17:08.997 "name": null, 00:17:08.997 "uuid": "7f55404a-82c2-4e02-9c22-f03a3a564ea5", 00:17:08.997 "is_configured": false, 00:17:08.997 "data_offset": 0, 00:17:08.997 "data_size": 63488 00:17:08.997 }, 00:17:08.997 { 00:17:08.997 "name": null, 00:17:08.997 "uuid": "01909527-9b05-4ba4-beaa-dce542f8687c", 00:17:08.997 "is_configured": false, 00:17:08.997 "data_offset": 0, 00:17:08.997 "data_size": 63488 00:17:08.997 }, 00:17:08.997 { 00:17:08.997 "name": "BaseBdev3", 00:17:08.997 "uuid": "168a07c0-6792-4f8f-9e90-cc34f8b9c2d0", 00:17:08.997 "is_configured": true, 00:17:08.997 "data_offset": 2048, 00:17:08.997 "data_size": 63488 00:17:08.997 } 00:17:08.997 ] 00:17:08.997 }' 00:17:08.997 14:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.997 14:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.562 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:09.562 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:09.562 14:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.562 14:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.562 14:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.562 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:09.562 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:09.562 14:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.563 14:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.563 [2024-11-20 14:27:48.488794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:09.563 14:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.563 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:09.563 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.563 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:09.563 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.563 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.563 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.563 14:27:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.563 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.563 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.563 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.563 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.563 14:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.563 14:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.563 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.563 14:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.821 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.821 "name": "Existed_Raid", 00:17:09.821 "uuid": "5adb07b3-b729-4705-9057-91dfca0c8f5c", 00:17:09.821 "strip_size_kb": 64, 00:17:09.821 "state": "configuring", 00:17:09.821 "raid_level": "raid5f", 00:17:09.821 "superblock": true, 00:17:09.821 "num_base_bdevs": 3, 00:17:09.821 "num_base_bdevs_discovered": 2, 00:17:09.821 "num_base_bdevs_operational": 3, 00:17:09.821 "base_bdevs_list": [ 00:17:09.821 { 00:17:09.821 "name": null, 00:17:09.821 "uuid": "7f55404a-82c2-4e02-9c22-f03a3a564ea5", 00:17:09.821 "is_configured": false, 00:17:09.821 "data_offset": 0, 00:17:09.821 "data_size": 63488 00:17:09.821 }, 00:17:09.821 { 00:17:09.821 "name": "BaseBdev2", 00:17:09.821 "uuid": "01909527-9b05-4ba4-beaa-dce542f8687c", 00:17:09.821 "is_configured": true, 00:17:09.822 "data_offset": 2048, 00:17:09.822 "data_size": 63488 00:17:09.822 }, 00:17:09.822 { 
00:17:09.822 "name": "BaseBdev3", 00:17:09.822 "uuid": "168a07c0-6792-4f8f-9e90-cc34f8b9c2d0", 00:17:09.822 "is_configured": true, 00:17:09.822 "data_offset": 2048, 00:17:09.822 "data_size": 63488 00:17:09.822 } 00:17:09.822 ] 00:17:09.822 }' 00:17:09.822 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.822 14:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.080 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:10.080 14:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.080 14:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.080 14:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.080 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.080 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:10.080 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.080 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.080 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.080 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:10.080 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.340 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7f55404a-82c2-4e02-9c22-f03a3a564ea5 00:17:10.340 14:27:49 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.340 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.340 [2024-11-20 14:27:49.126816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:10.340 [2024-11-20 14:27:49.127113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:10.340 [2024-11-20 14:27:49.127138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:10.340 [2024-11-20 14:27:49.127475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:10.340 NewBaseBdev 00:17:10.340 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.340 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:10.340 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:10.340 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:10.340 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:10.340 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:10.340 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:10.340 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:10.340 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.340 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.340 [2024-11-20 14:27:49.132365] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:10.340 
[2024-11-20 14:27:49.132396] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:10.340 [2024-11-20 14:27:49.132708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.340 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.340 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:10.340 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.340 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.340 [ 00:17:10.340 { 00:17:10.340 "name": "NewBaseBdev", 00:17:10.340 "aliases": [ 00:17:10.340 "7f55404a-82c2-4e02-9c22-f03a3a564ea5" 00:17:10.340 ], 00:17:10.340 "product_name": "Malloc disk", 00:17:10.340 "block_size": 512, 00:17:10.340 "num_blocks": 65536, 00:17:10.340 "uuid": "7f55404a-82c2-4e02-9c22-f03a3a564ea5", 00:17:10.340 "assigned_rate_limits": { 00:17:10.340 "rw_ios_per_sec": 0, 00:17:10.340 "rw_mbytes_per_sec": 0, 00:17:10.341 "r_mbytes_per_sec": 0, 00:17:10.341 "w_mbytes_per_sec": 0 00:17:10.341 }, 00:17:10.341 "claimed": true, 00:17:10.341 "claim_type": "exclusive_write", 00:17:10.341 "zoned": false, 00:17:10.341 "supported_io_types": { 00:17:10.341 "read": true, 00:17:10.341 "write": true, 00:17:10.341 "unmap": true, 00:17:10.341 "flush": true, 00:17:10.341 "reset": true, 00:17:10.341 "nvme_admin": false, 00:17:10.341 "nvme_io": false, 00:17:10.341 "nvme_io_md": false, 00:17:10.341 "write_zeroes": true, 00:17:10.341 "zcopy": true, 00:17:10.341 "get_zone_info": false, 00:17:10.341 "zone_management": false, 00:17:10.341 "zone_append": false, 00:17:10.341 "compare": false, 00:17:10.341 "compare_and_write": false, 00:17:10.341 "abort": true, 00:17:10.341 "seek_hole": false, 00:17:10.341 "seek_data": false, 
00:17:10.341 "copy": true, 00:17:10.341 "nvme_iov_md": false 00:17:10.341 }, 00:17:10.341 "memory_domains": [ 00:17:10.341 { 00:17:10.341 "dma_device_id": "system", 00:17:10.341 "dma_device_type": 1 00:17:10.341 }, 00:17:10.341 { 00:17:10.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.341 "dma_device_type": 2 00:17:10.341 } 00:17:10.341 ], 00:17:10.341 "driver_specific": {} 00:17:10.341 } 00:17:10.341 ] 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.341 14:27:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.341 "name": "Existed_Raid", 00:17:10.341 "uuid": "5adb07b3-b729-4705-9057-91dfca0c8f5c", 00:17:10.341 "strip_size_kb": 64, 00:17:10.341 "state": "online", 00:17:10.341 "raid_level": "raid5f", 00:17:10.341 "superblock": true, 00:17:10.341 "num_base_bdevs": 3, 00:17:10.341 "num_base_bdevs_discovered": 3, 00:17:10.341 "num_base_bdevs_operational": 3, 00:17:10.341 "base_bdevs_list": [ 00:17:10.341 { 00:17:10.341 "name": "NewBaseBdev", 00:17:10.341 "uuid": "7f55404a-82c2-4e02-9c22-f03a3a564ea5", 00:17:10.341 "is_configured": true, 00:17:10.341 "data_offset": 2048, 00:17:10.341 "data_size": 63488 00:17:10.341 }, 00:17:10.341 { 00:17:10.341 "name": "BaseBdev2", 00:17:10.341 "uuid": "01909527-9b05-4ba4-beaa-dce542f8687c", 00:17:10.341 "is_configured": true, 00:17:10.341 "data_offset": 2048, 00:17:10.341 "data_size": 63488 00:17:10.341 }, 00:17:10.341 { 00:17:10.341 "name": "BaseBdev3", 00:17:10.341 "uuid": "168a07c0-6792-4f8f-9e90-cc34f8b9c2d0", 00:17:10.341 "is_configured": true, 00:17:10.341 "data_offset": 2048, 00:17:10.341 "data_size": 63488 00:17:10.341 } 00:17:10.341 ] 00:17:10.341 }' 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.341 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.909 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties 
Existed_Raid 00:17:10.909 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:10.909 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:10.909 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:10.909 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:10.909 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:10.909 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:10.909 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:10.909 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.909 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.909 [2024-11-20 14:27:49.698687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.909 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.909 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:10.909 "name": "Existed_Raid", 00:17:10.909 "aliases": [ 00:17:10.909 "5adb07b3-b729-4705-9057-91dfca0c8f5c" 00:17:10.909 ], 00:17:10.909 "product_name": "Raid Volume", 00:17:10.909 "block_size": 512, 00:17:10.909 "num_blocks": 126976, 00:17:10.909 "uuid": "5adb07b3-b729-4705-9057-91dfca0c8f5c", 00:17:10.909 "assigned_rate_limits": { 00:17:10.909 "rw_ios_per_sec": 0, 00:17:10.909 "rw_mbytes_per_sec": 0, 00:17:10.909 "r_mbytes_per_sec": 0, 00:17:10.909 "w_mbytes_per_sec": 0 00:17:10.909 }, 00:17:10.909 "claimed": false, 00:17:10.909 "zoned": false, 00:17:10.909 "supported_io_types": { 
00:17:10.909 "read": true, 00:17:10.909 "write": true, 00:17:10.909 "unmap": false, 00:17:10.909 "flush": false, 00:17:10.909 "reset": true, 00:17:10.909 "nvme_admin": false, 00:17:10.909 "nvme_io": false, 00:17:10.909 "nvme_io_md": false, 00:17:10.909 "write_zeroes": true, 00:17:10.909 "zcopy": false, 00:17:10.909 "get_zone_info": false, 00:17:10.909 "zone_management": false, 00:17:10.909 "zone_append": false, 00:17:10.909 "compare": false, 00:17:10.909 "compare_and_write": false, 00:17:10.909 "abort": false, 00:17:10.909 "seek_hole": false, 00:17:10.909 "seek_data": false, 00:17:10.909 "copy": false, 00:17:10.909 "nvme_iov_md": false 00:17:10.909 }, 00:17:10.909 "driver_specific": { 00:17:10.909 "raid": { 00:17:10.910 "uuid": "5adb07b3-b729-4705-9057-91dfca0c8f5c", 00:17:10.910 "strip_size_kb": 64, 00:17:10.910 "state": "online", 00:17:10.910 "raid_level": "raid5f", 00:17:10.910 "superblock": true, 00:17:10.910 "num_base_bdevs": 3, 00:17:10.910 "num_base_bdevs_discovered": 3, 00:17:10.910 "num_base_bdevs_operational": 3, 00:17:10.910 "base_bdevs_list": [ 00:17:10.910 { 00:17:10.910 "name": "NewBaseBdev", 00:17:10.910 "uuid": "7f55404a-82c2-4e02-9c22-f03a3a564ea5", 00:17:10.910 "is_configured": true, 00:17:10.910 "data_offset": 2048, 00:17:10.910 "data_size": 63488 00:17:10.910 }, 00:17:10.910 { 00:17:10.910 "name": "BaseBdev2", 00:17:10.910 "uuid": "01909527-9b05-4ba4-beaa-dce542f8687c", 00:17:10.910 "is_configured": true, 00:17:10.910 "data_offset": 2048, 00:17:10.910 "data_size": 63488 00:17:10.910 }, 00:17:10.910 { 00:17:10.910 "name": "BaseBdev3", 00:17:10.910 "uuid": "168a07c0-6792-4f8f-9e90-cc34f8b9c2d0", 00:17:10.910 "is_configured": true, 00:17:10.910 "data_offset": 2048, 00:17:10.910 "data_size": 63488 00:17:10.910 } 00:17:10.910 ] 00:17:10.910 } 00:17:10.910 } 00:17:10.910 }' 00:17:10.910 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:17:10.910 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:10.910 BaseBdev2 00:17:10.910 BaseBdev3' 00:17:10.910 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.910 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:10.910 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.910 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:10.910 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.910 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.910 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.910 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.169 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:11.169 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:11.169 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:11.169 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:11.169 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.169 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.169 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:11.169 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.169 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:11.169 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:11.169 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:11.169 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:11.169 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.169 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.169 14:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:11.169 14:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.169 14:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:11.169 14:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:11.169 14:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:11.169 14:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.169 14:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.169 [2024-11-20 14:27:50.010704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:11.169 [2024-11-20 14:27:50.010743] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:17:11.169 [2024-11-20 14:27:50.010858] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.169 [2024-11-20 14:27:50.011236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.169 [2024-11-20 14:27:50.011270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:11.169 14:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.169 14:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80906 00:17:11.169 14:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80906 ']' 00:17:11.169 14:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80906 00:17:11.169 14:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:11.169 14:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.169 14:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80906 00:17:11.169 14:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:11.169 14:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:11.169 killing process with pid 80906 00:17:11.169 14:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80906' 00:17:11.169 14:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80906 00:17:11.169 [2024-11-20 14:27:50.051841] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:11.169 14:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 
80906 00:17:11.428 [2024-11-20 14:27:50.325379] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:12.806 14:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:12.806 00:17:12.806 real 0m11.814s 00:17:12.806 user 0m19.585s 00:17:12.806 sys 0m1.679s 00:17:12.806 14:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.806 14:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.806 ************************************ 00:17:12.807 END TEST raid5f_state_function_test_sb 00:17:12.807 ************************************ 00:17:12.807 14:27:51 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:17:12.807 14:27:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:12.807 14:27:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.807 14:27:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:12.807 ************************************ 00:17:12.807 START TEST raid5f_superblock_test 00:17:12.807 ************************************ 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:12.807 14:27:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81533 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81533 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81533 ']' 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.807 14:27:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.807 [2024-11-20 14:27:51.549489] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:17:12.807 [2024-11-20 14:27:51.549692] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81533 ] 00:17:12.807 [2024-11-20 14:27:51.733169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.065 [2024-11-20 14:27:51.862162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.324 [2024-11-20 14:27:52.066744] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.324 [2024-11-20 14:27:52.066827] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.582 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.582 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:13.582 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:13.582 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.582 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:13.582 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.583 malloc1 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.583 [2024-11-20 14:27:52.540723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:13.583 [2024-11-20 14:27:52.540814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.583 [2024-11-20 14:27:52.540847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:13.583 [2024-11-20 14:27:52.540863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.583 [2024-11-20 14:27:52.543701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.583 [2024-11-20 14:27:52.543769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:13.583 pt1 00:17:13.583 
14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.583 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.842 malloc2 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.842 [2024-11-20 14:27:52.596699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:13.842 [2024-11-20 
14:27:52.596768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.842 [2024-11-20 14:27:52.596806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:13.842 [2024-11-20 14:27:52.596822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.842 [2024-11-20 14:27:52.600038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.842 [2024-11-20 14:27:52.600090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:13.842 pt2 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.842 malloc3 00:17:13.842 14:27:52 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.842 [2024-11-20 14:27:52.661003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:13.842 [2024-11-20 14:27:52.661199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.842 [2024-11-20 14:27:52.661246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:13.842 [2024-11-20 14:27:52.661264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.842 [2024-11-20 14:27:52.664054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.842 [2024-11-20 14:27:52.664103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:13.842 pt3 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.842 [2024-11-20 14:27:52.669063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 
is claimed 00:17:13.842 [2024-11-20 14:27:52.671515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:13.842 [2024-11-20 14:27:52.671618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:13.842 [2024-11-20 14:27:52.671868] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:13.842 [2024-11-20 14:27:52.671901] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:13.842 [2024-11-20 14:27:52.672241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:13.842 [2024-11-20 14:27:52.677563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:13.842 [2024-11-20 14:27:52.677724] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:13.842 [2024-11-20 14:27:52.678138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.842 "name": "raid_bdev1", 00:17:13.842 "uuid": "fc65ce42-a1a8-4027-9609-0145f9de54c7", 00:17:13.842 "strip_size_kb": 64, 00:17:13.842 "state": "online", 00:17:13.842 "raid_level": "raid5f", 00:17:13.842 "superblock": true, 00:17:13.842 "num_base_bdevs": 3, 00:17:13.842 "num_base_bdevs_discovered": 3, 00:17:13.842 "num_base_bdevs_operational": 3, 00:17:13.842 "base_bdevs_list": [ 00:17:13.842 { 00:17:13.842 "name": "pt1", 00:17:13.842 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:13.842 "is_configured": true, 00:17:13.842 "data_offset": 2048, 00:17:13.842 "data_size": 63488 00:17:13.842 }, 00:17:13.842 { 00:17:13.842 "name": "pt2", 00:17:13.842 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:13.842 "is_configured": true, 00:17:13.842 "data_offset": 2048, 00:17:13.842 "data_size": 63488 00:17:13.842 }, 00:17:13.842 { 00:17:13.842 "name": "pt3", 00:17:13.842 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:13.842 "is_configured": true, 00:17:13.842 "data_offset": 2048, 00:17:13.842 "data_size": 63488 00:17:13.842 } 00:17:13.842 ] 
00:17:13.842 }' 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.842 14:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.410 [2024-11-20 14:27:53.204346] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:14.410 "name": "raid_bdev1", 00:17:14.410 "aliases": [ 00:17:14.410 "fc65ce42-a1a8-4027-9609-0145f9de54c7" 00:17:14.410 ], 00:17:14.410 "product_name": "Raid Volume", 00:17:14.410 "block_size": 512, 00:17:14.410 "num_blocks": 126976, 00:17:14.410 "uuid": "fc65ce42-a1a8-4027-9609-0145f9de54c7", 00:17:14.410 "assigned_rate_limits": { 00:17:14.410 
"rw_ios_per_sec": 0, 00:17:14.410 "rw_mbytes_per_sec": 0, 00:17:14.410 "r_mbytes_per_sec": 0, 00:17:14.410 "w_mbytes_per_sec": 0 00:17:14.410 }, 00:17:14.410 "claimed": false, 00:17:14.410 "zoned": false, 00:17:14.410 "supported_io_types": { 00:17:14.410 "read": true, 00:17:14.410 "write": true, 00:17:14.410 "unmap": false, 00:17:14.410 "flush": false, 00:17:14.410 "reset": true, 00:17:14.410 "nvme_admin": false, 00:17:14.410 "nvme_io": false, 00:17:14.410 "nvme_io_md": false, 00:17:14.410 "write_zeroes": true, 00:17:14.410 "zcopy": false, 00:17:14.410 "get_zone_info": false, 00:17:14.410 "zone_management": false, 00:17:14.410 "zone_append": false, 00:17:14.410 "compare": false, 00:17:14.410 "compare_and_write": false, 00:17:14.410 "abort": false, 00:17:14.410 "seek_hole": false, 00:17:14.410 "seek_data": false, 00:17:14.410 "copy": false, 00:17:14.410 "nvme_iov_md": false 00:17:14.410 }, 00:17:14.410 "driver_specific": { 00:17:14.410 "raid": { 00:17:14.410 "uuid": "fc65ce42-a1a8-4027-9609-0145f9de54c7", 00:17:14.410 "strip_size_kb": 64, 00:17:14.410 "state": "online", 00:17:14.410 "raid_level": "raid5f", 00:17:14.410 "superblock": true, 00:17:14.410 "num_base_bdevs": 3, 00:17:14.410 "num_base_bdevs_discovered": 3, 00:17:14.410 "num_base_bdevs_operational": 3, 00:17:14.410 "base_bdevs_list": [ 00:17:14.410 { 00:17:14.410 "name": "pt1", 00:17:14.410 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.410 "is_configured": true, 00:17:14.410 "data_offset": 2048, 00:17:14.410 "data_size": 63488 00:17:14.410 }, 00:17:14.410 { 00:17:14.410 "name": "pt2", 00:17:14.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.410 "is_configured": true, 00:17:14.410 "data_offset": 2048, 00:17:14.410 "data_size": 63488 00:17:14.410 }, 00:17:14.410 { 00:17:14.410 "name": "pt3", 00:17:14.410 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.410 "is_configured": true, 00:17:14.410 "data_offset": 2048, 00:17:14.410 "data_size": 63488 00:17:14.410 } 00:17:14.410 ] 
00:17:14.410 } 00:17:14.410 } 00:17:14.410 }' 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:14.410 pt2 00:17:14.410 pt3' 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.410 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.670 [2024-11-20 14:27:53.536386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.670 14:27:53 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fc65ce42-a1a8-4027-9609-0145f9de54c7 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fc65ce42-a1a8-4027-9609-0145f9de54c7 ']' 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.670 [2024-11-20 14:27:53.588149] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.670 [2024-11-20 14:27:53.588187] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.670 [2024-11-20 14:27:53.588286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.670 [2024-11-20 14:27:53.588401] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.670 [2024-11-20 14:27:53.588417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.670 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.929 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.929 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:14.929 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:14.929 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.929 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.929 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.930 [2024-11-20 14:27:53.740231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:14.930 [2024-11-20 
14:27:53.742883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:14.930 [2024-11-20 14:27:53.743126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:14.930 [2024-11-20 14:27:53.743253] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:14.930 [2024-11-20 14:27:53.743510] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:14.930 [2024-11-20 14:27:53.743702] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:14.930 [2024-11-20 14:27:53.743862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.930 [2024-11-20 14:27:53.743907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:14.930 request: 00:17:14.930 { 00:17:14.930 "name": "raid_bdev1", 00:17:14.930 "raid_level": "raid5f", 00:17:14.930 "base_bdevs": [ 00:17:14.930 "malloc1", 00:17:14.930 "malloc2", 00:17:14.930 "malloc3" 00:17:14.930 ], 00:17:14.930 "strip_size_kb": 64, 00:17:14.930 "superblock": false, 00:17:14.930 "method": "bdev_raid_create", 00:17:14.930 "req_id": 1 00:17:14.930 } 00:17:14.930 Got JSON-RPC error response 00:17:14.930 response: 00:17:14.930 { 00:17:14.930 "code": -17, 00:17:14.930 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:14.930 } 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.930 [2024-11-20 14:27:53.808342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:14.930 [2024-11-20 14:27:53.808418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.930 [2024-11-20 14:27:53.808446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:14.930 [2024-11-20 14:27:53.808460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.930 [2024-11-20 14:27:53.811367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.930 [2024-11-20 14:27:53.811537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:14.930 [2024-11-20 14:27:53.811645] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:14.930 [2024-11-20 14:27:53.811712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:14.930 pt1 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.930 "name": "raid_bdev1", 00:17:14.930 "uuid": "fc65ce42-a1a8-4027-9609-0145f9de54c7", 00:17:14.930 "strip_size_kb": 64, 00:17:14.930 "state": "configuring", 00:17:14.930 "raid_level": "raid5f", 00:17:14.930 "superblock": true, 00:17:14.930 "num_base_bdevs": 3, 00:17:14.930 "num_base_bdevs_discovered": 1, 00:17:14.930 "num_base_bdevs_operational": 3, 00:17:14.930 "base_bdevs_list": [ 00:17:14.930 { 00:17:14.930 "name": "pt1", 00:17:14.930 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.930 "is_configured": true, 00:17:14.930 "data_offset": 2048, 00:17:14.930 "data_size": 63488 00:17:14.930 }, 00:17:14.930 { 00:17:14.930 "name": null, 00:17:14.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.930 "is_configured": false, 00:17:14.930 "data_offset": 2048, 00:17:14.930 "data_size": 63488 00:17:14.930 }, 00:17:14.930 { 00:17:14.930 "name": null, 00:17:14.930 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.930 "is_configured": false, 00:17:14.930 "data_offset": 2048, 00:17:14.930 "data_size": 63488 00:17:14.930 } 00:17:14.930 ] 00:17:14.930 }' 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.930 14:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.499 [2024-11-20 14:27:54.308498] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:15.499 [2024-11-20 14:27:54.308578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.499 [2024-11-20 14:27:54.308613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:15.499 [2024-11-20 14:27:54.308628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.499 [2024-11-20 14:27:54.309190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.499 [2024-11-20 14:27:54.309233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:15.499 [2024-11-20 14:27:54.309340] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:15.499 [2024-11-20 14:27:54.309382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:15.499 pt2 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.499 [2024-11-20 14:27:54.316481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.499 "name": "raid_bdev1", 00:17:15.499 "uuid": "fc65ce42-a1a8-4027-9609-0145f9de54c7", 00:17:15.499 "strip_size_kb": 64, 00:17:15.499 "state": "configuring", 00:17:15.499 "raid_level": "raid5f", 00:17:15.499 "superblock": true, 00:17:15.499 "num_base_bdevs": 3, 00:17:15.499 "num_base_bdevs_discovered": 1, 00:17:15.499 "num_base_bdevs_operational": 3, 00:17:15.499 "base_bdevs_list": [ 00:17:15.499 { 00:17:15.499 "name": "pt1", 00:17:15.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.499 "is_configured": true, 00:17:15.499 "data_offset": 2048, 00:17:15.499 "data_size": 63488 00:17:15.499 }, 00:17:15.499 { 
00:17:15.499 "name": null, 00:17:15.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.499 "is_configured": false, 00:17:15.499 "data_offset": 0, 00:17:15.499 "data_size": 63488 00:17:15.499 }, 00:17:15.499 { 00:17:15.499 "name": null, 00:17:15.499 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.499 "is_configured": false, 00:17:15.499 "data_offset": 2048, 00:17:15.499 "data_size": 63488 00:17:15.499 } 00:17:15.499 ] 00:17:15.499 }' 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.499 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.067 [2024-11-20 14:27:54.824617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:16.067 [2024-11-20 14:27:54.824705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.067 [2024-11-20 14:27:54.824733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:16.067 [2024-11-20 14:27:54.824750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.067 [2024-11-20 14:27:54.825358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.067 [2024-11-20 14:27:54.825397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:16.067 [2024-11-20 
14:27:54.825498] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:16.067 [2024-11-20 14:27:54.825535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:16.067 pt2 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.067 [2024-11-20 14:27:54.832585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:16.067 [2024-11-20 14:27:54.832783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.067 [2024-11-20 14:27:54.832815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:16.067 [2024-11-20 14:27:54.832841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.067 [2024-11-20 14:27:54.833301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.067 [2024-11-20 14:27:54.833337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:16.067 [2024-11-20 14:27:54.833415] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:16.067 [2024-11-20 14:27:54.833447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:16.067 [2024-11-20 14:27:54.833606] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:16.067 [2024-11-20 14:27:54.833630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:16.067 [2024-11-20 14:27:54.833936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:16.067 [2024-11-20 14:27:54.838893] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:16.067 [2024-11-20 14:27:54.838920] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:16.067 [2024-11-20 14:27:54.839171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.067 pt3 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.067 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.067 "name": "raid_bdev1", 00:17:16.067 "uuid": "fc65ce42-a1a8-4027-9609-0145f9de54c7", 00:17:16.067 "strip_size_kb": 64, 00:17:16.067 "state": "online", 00:17:16.067 "raid_level": "raid5f", 00:17:16.067 "superblock": true, 00:17:16.067 "num_base_bdevs": 3, 00:17:16.067 "num_base_bdevs_discovered": 3, 00:17:16.067 "num_base_bdevs_operational": 3, 00:17:16.067 "base_bdevs_list": [ 00:17:16.067 { 00:17:16.067 "name": "pt1", 00:17:16.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:16.068 "is_configured": true, 00:17:16.068 "data_offset": 2048, 00:17:16.068 "data_size": 63488 00:17:16.068 }, 00:17:16.068 { 00:17:16.068 "name": "pt2", 00:17:16.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.068 "is_configured": true, 00:17:16.068 "data_offset": 2048, 00:17:16.068 "data_size": 63488 00:17:16.068 }, 00:17:16.068 { 00:17:16.068 "name": "pt3", 00:17:16.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.068 "is_configured": true, 00:17:16.068 "data_offset": 2048, 00:17:16.068 "data_size": 63488 00:17:16.068 } 00:17:16.068 ] 00:17:16.068 }' 00:17:16.068 14:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.068 14:27:54 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.635 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:16.635 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:16.635 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:16.635 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:16.635 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:16.635 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:16.635 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:16.635 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:16.635 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.635 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.635 [2024-11-20 14:27:55.361145] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.635 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.635 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:16.635 "name": "raid_bdev1", 00:17:16.635 "aliases": [ 00:17:16.635 "fc65ce42-a1a8-4027-9609-0145f9de54c7" 00:17:16.635 ], 00:17:16.635 "product_name": "Raid Volume", 00:17:16.635 "block_size": 512, 00:17:16.635 "num_blocks": 126976, 00:17:16.635 "uuid": "fc65ce42-a1a8-4027-9609-0145f9de54c7", 00:17:16.635 "assigned_rate_limits": { 00:17:16.635 "rw_ios_per_sec": 0, 00:17:16.635 "rw_mbytes_per_sec": 0, 00:17:16.635 "r_mbytes_per_sec": 0, 00:17:16.635 "w_mbytes_per_sec": 0 00:17:16.635 }, 
00:17:16.635 "claimed": false, 00:17:16.635 "zoned": false, 00:17:16.635 "supported_io_types": { 00:17:16.635 "read": true, 00:17:16.635 "write": true, 00:17:16.635 "unmap": false, 00:17:16.635 "flush": false, 00:17:16.635 "reset": true, 00:17:16.635 "nvme_admin": false, 00:17:16.635 "nvme_io": false, 00:17:16.635 "nvme_io_md": false, 00:17:16.635 "write_zeroes": true, 00:17:16.635 "zcopy": false, 00:17:16.635 "get_zone_info": false, 00:17:16.635 "zone_management": false, 00:17:16.635 "zone_append": false, 00:17:16.635 "compare": false, 00:17:16.635 "compare_and_write": false, 00:17:16.635 "abort": false, 00:17:16.636 "seek_hole": false, 00:17:16.636 "seek_data": false, 00:17:16.636 "copy": false, 00:17:16.636 "nvme_iov_md": false 00:17:16.636 }, 00:17:16.636 "driver_specific": { 00:17:16.636 "raid": { 00:17:16.636 "uuid": "fc65ce42-a1a8-4027-9609-0145f9de54c7", 00:17:16.636 "strip_size_kb": 64, 00:17:16.636 "state": "online", 00:17:16.636 "raid_level": "raid5f", 00:17:16.636 "superblock": true, 00:17:16.636 "num_base_bdevs": 3, 00:17:16.636 "num_base_bdevs_discovered": 3, 00:17:16.636 "num_base_bdevs_operational": 3, 00:17:16.636 "base_bdevs_list": [ 00:17:16.636 { 00:17:16.636 "name": "pt1", 00:17:16.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:16.636 "is_configured": true, 00:17:16.636 "data_offset": 2048, 00:17:16.636 "data_size": 63488 00:17:16.636 }, 00:17:16.636 { 00:17:16.636 "name": "pt2", 00:17:16.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.636 "is_configured": true, 00:17:16.636 "data_offset": 2048, 00:17:16.636 "data_size": 63488 00:17:16.636 }, 00:17:16.636 { 00:17:16.636 "name": "pt3", 00:17:16.636 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.636 "is_configured": true, 00:17:16.636 "data_offset": 2048, 00:17:16.636 "data_size": 63488 00:17:16.636 } 00:17:16.636 ] 00:17:16.636 } 00:17:16.636 } 00:17:16.636 }' 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:16.636 pt2 00:17:16.636 pt3' 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.636 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.895 [2024-11-20 14:27:55.649164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
fc65ce42-a1a8-4027-9609-0145f9de54c7 '!=' fc65ce42-a1a8-4027-9609-0145f9de54c7 ']' 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.895 [2024-11-20 14:27:55.697001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.895 "name": "raid_bdev1", 00:17:16.895 "uuid": "fc65ce42-a1a8-4027-9609-0145f9de54c7", 00:17:16.895 "strip_size_kb": 64, 00:17:16.895 "state": "online", 00:17:16.895 "raid_level": "raid5f", 00:17:16.895 "superblock": true, 00:17:16.895 "num_base_bdevs": 3, 00:17:16.895 "num_base_bdevs_discovered": 2, 00:17:16.895 "num_base_bdevs_operational": 2, 00:17:16.895 "base_bdevs_list": [ 00:17:16.895 { 00:17:16.895 "name": null, 00:17:16.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.895 "is_configured": false, 00:17:16.895 "data_offset": 0, 00:17:16.895 "data_size": 63488 00:17:16.895 }, 00:17:16.895 { 00:17:16.895 "name": "pt2", 00:17:16.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.895 "is_configured": true, 00:17:16.895 "data_offset": 2048, 00:17:16.895 "data_size": 63488 00:17:16.895 }, 00:17:16.895 { 00:17:16.895 "name": "pt3", 00:17:16.895 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.895 "is_configured": true, 00:17:16.895 "data_offset": 2048, 00:17:16.895 "data_size": 63488 00:17:16.895 } 00:17:16.895 ] 00:17:16.895 }' 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.895 14:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.463 
14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.463 [2024-11-20 14:27:56.201128] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.463 [2024-11-20 14:27:56.201165] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.463 [2024-11-20 14:27:56.201261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.463 [2024-11-20 14:27:56.201336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.463 [2024-11-20 14:27:56.201359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:17.463 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.464 [2024-11-20 14:27:56.285069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:17:17.464 [2024-11-20 14:27:56.285272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.464 [2024-11-20 14:27:56.285309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:17.464 [2024-11-20 14:27:56.285327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.464 [2024-11-20 14:27:56.288207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.464 [2024-11-20 14:27:56.288260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:17.464 [2024-11-20 14:27:56.288358] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:17.464 [2024-11-20 14:27:56.288421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:17.464 pt2 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.464 "name": "raid_bdev1", 00:17:17.464 "uuid": "fc65ce42-a1a8-4027-9609-0145f9de54c7", 00:17:17.464 "strip_size_kb": 64, 00:17:17.464 "state": "configuring", 00:17:17.464 "raid_level": "raid5f", 00:17:17.464 "superblock": true, 00:17:17.464 "num_base_bdevs": 3, 00:17:17.464 "num_base_bdevs_discovered": 1, 00:17:17.464 "num_base_bdevs_operational": 2, 00:17:17.464 "base_bdevs_list": [ 00:17:17.464 { 00:17:17.464 "name": null, 00:17:17.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.464 "is_configured": false, 00:17:17.464 "data_offset": 2048, 00:17:17.464 "data_size": 63488 00:17:17.464 }, 00:17:17.464 { 00:17:17.464 "name": "pt2", 00:17:17.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.464 "is_configured": true, 00:17:17.464 "data_offset": 2048, 00:17:17.464 "data_size": 63488 00:17:17.464 }, 00:17:17.464 { 00:17:17.464 "name": null, 00:17:17.464 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.464 "is_configured": false, 00:17:17.464 "data_offset": 2048, 00:17:17.464 "data_size": 63488 00:17:17.464 } 00:17:17.464 ] 00:17:17.464 }' 00:17:17.464 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.464 14:27:56 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.044 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:18.044 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:18.044 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:17:18.044 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.045 [2024-11-20 14:27:56.769218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:18.045 [2024-11-20 14:27:56.769310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.045 [2024-11-20 14:27:56.769343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:18.045 [2024-11-20 14:27:56.769361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.045 [2024-11-20 14:27:56.769952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.045 [2024-11-20 14:27:56.770003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:18.045 [2024-11-20 14:27:56.770108] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:18.045 [2024-11-20 14:27:56.770167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:18.045 [2024-11-20 14:27:56.770325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:18.045 [2024-11-20 14:27:56.770352] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:18.045 [2024-11-20 
14:27:56.770670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:18.045 [2024-11-20 14:27:56.775563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:18.045 [2024-11-20 14:27:56.775589] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:18.045 [2024-11-20 14:27:56.776035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.045 pt3 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.045 14:27:56 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.045 "name": "raid_bdev1", 00:17:18.045 "uuid": "fc65ce42-a1a8-4027-9609-0145f9de54c7", 00:17:18.045 "strip_size_kb": 64, 00:17:18.045 "state": "online", 00:17:18.045 "raid_level": "raid5f", 00:17:18.045 "superblock": true, 00:17:18.045 "num_base_bdevs": 3, 00:17:18.045 "num_base_bdevs_discovered": 2, 00:17:18.045 "num_base_bdevs_operational": 2, 00:17:18.045 "base_bdevs_list": [ 00:17:18.045 { 00:17:18.045 "name": null, 00:17:18.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.045 "is_configured": false, 00:17:18.045 "data_offset": 2048, 00:17:18.045 "data_size": 63488 00:17:18.045 }, 00:17:18.045 { 00:17:18.045 "name": "pt2", 00:17:18.045 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.045 "is_configured": true, 00:17:18.045 "data_offset": 2048, 00:17:18.045 "data_size": 63488 00:17:18.045 }, 00:17:18.045 { 00:17:18.045 "name": "pt3", 00:17:18.045 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:18.045 "is_configured": true, 00:17:18.045 "data_offset": 2048, 00:17:18.045 "data_size": 63488 00:17:18.045 } 00:17:18.045 ] 00:17:18.045 }' 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.045 14:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:18.613 [2024-11-20 14:27:57.297682] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.613 [2024-11-20 14:27:57.297860] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.613 [2024-11-20 14:27:57.298088] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.613 [2024-11-20 14:27:57.298289] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.613 [2024-11-20 14:27:57.298452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.613 14:27:57 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.613 [2024-11-20 14:27:57.361726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:18.613 [2024-11-20 14:27:57.361798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.613 [2024-11-20 14:27:57.361827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:18.613 [2024-11-20 14:27:57.361842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.613 [2024-11-20 14:27:57.364731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.613 [2024-11-20 14:27:57.364901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:18.613 [2024-11-20 14:27:57.365051] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:18.613 [2024-11-20 14:27:57.365117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:18.613 [2024-11-20 14:27:57.365301] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:18.613 [2024-11-20 14:27:57.365321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.613 [2024-11-20 14:27:57.365343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:18.613 
[2024-11-20 14:27:57.365408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:18.613 pt1 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.613 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.613 "name": "raid_bdev1", 00:17:18.613 "uuid": "fc65ce42-a1a8-4027-9609-0145f9de54c7", 00:17:18.613 "strip_size_kb": 64, 00:17:18.613 "state": "configuring", 00:17:18.613 "raid_level": "raid5f", 00:17:18.613 "superblock": true, 00:17:18.613 "num_base_bdevs": 3, 00:17:18.613 "num_base_bdevs_discovered": 1, 00:17:18.613 "num_base_bdevs_operational": 2, 00:17:18.613 "base_bdevs_list": [ 00:17:18.613 { 00:17:18.613 "name": null, 00:17:18.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.613 "is_configured": false, 00:17:18.613 "data_offset": 2048, 00:17:18.613 "data_size": 63488 00:17:18.613 }, 00:17:18.613 { 00:17:18.613 "name": "pt2", 00:17:18.613 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.613 "is_configured": true, 00:17:18.613 "data_offset": 2048, 00:17:18.613 "data_size": 63488 00:17:18.613 }, 00:17:18.614 { 00:17:18.614 "name": null, 00:17:18.614 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:18.614 "is_configured": false, 00:17:18.614 "data_offset": 2048, 00:17:18.614 "data_size": 63488 00:17:18.614 } 00:17:18.614 ] 00:17:18.614 }' 00:17:18.614 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.614 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.873 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:18.873 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:18.873 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.873 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.131 [2024-11-20 14:27:57.909884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:19.131 [2024-11-20 14:27:57.909963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.131 [2024-11-20 14:27:57.910018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:19.131 [2024-11-20 14:27:57.910037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.131 [2024-11-20 14:27:57.910633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.131 [2024-11-20 14:27:57.910668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:19.131 [2024-11-20 14:27:57.910770] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:19.131 [2024-11-20 14:27:57.910802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:19.131 [2024-11-20 14:27:57.910955] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:19.131 [2024-11-20 14:27:57.910971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:19.131 [2024-11-20 14:27:57.911316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:19.131 [2024-11-20 14:27:57.916161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:19.131 [2024-11-20 
14:27:57.916212] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:19.131 [2024-11-20 14:27:57.916508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.131 pt3 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.131 14:27:57 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.131 "name": "raid_bdev1", 00:17:19.131 "uuid": "fc65ce42-a1a8-4027-9609-0145f9de54c7", 00:17:19.131 "strip_size_kb": 64, 00:17:19.131 "state": "online", 00:17:19.131 "raid_level": "raid5f", 00:17:19.131 "superblock": true, 00:17:19.131 "num_base_bdevs": 3, 00:17:19.131 "num_base_bdevs_discovered": 2, 00:17:19.131 "num_base_bdevs_operational": 2, 00:17:19.131 "base_bdevs_list": [ 00:17:19.131 { 00:17:19.131 "name": null, 00:17:19.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.131 "is_configured": false, 00:17:19.131 "data_offset": 2048, 00:17:19.131 "data_size": 63488 00:17:19.131 }, 00:17:19.131 { 00:17:19.131 "name": "pt2", 00:17:19.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.131 "is_configured": true, 00:17:19.131 "data_offset": 2048, 00:17:19.131 "data_size": 63488 00:17:19.131 }, 00:17:19.131 { 00:17:19.131 "name": "pt3", 00:17:19.131 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:19.131 "is_configured": true, 00:17:19.131 "data_offset": 2048, 00:17:19.131 "data_size": 63488 00:17:19.131 } 00:17:19.131 ] 00:17:19.131 }' 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.131 14:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.741 [2024-11-20 14:27:58.482473] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' fc65ce42-a1a8-4027-9609-0145f9de54c7 '!=' fc65ce42-a1a8-4027-9609-0145f9de54c7 ']' 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81533 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81533 ']' 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81533 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81533 00:17:19.741 killing process with pid 81533 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 81533' 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81533 00:17:19.741 [2024-11-20 14:27:58.560125] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:19.741 14:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81533 00:17:19.741 [2024-11-20 14:27:58.560238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.741 [2024-11-20 14:27:58.560316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.741 [2024-11-20 14:27:58.560335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:20.003 [2024-11-20 14:27:58.831351] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:20.938 14:27:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:20.938 00:17:20.938 real 0m8.445s 00:17:20.938 user 0m13.687s 00:17:20.938 sys 0m1.269s 00:17:20.938 14:27:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:20.938 14:27:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.938 ************************************ 00:17:20.938 END TEST raid5f_superblock_test 00:17:20.938 ************************************ 00:17:21.196 14:27:59 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:21.196 14:27:59 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:17:21.196 14:27:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:21.196 14:27:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.196 14:27:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:21.196 ************************************ 00:17:21.196 START TEST 
raid5f_rebuild_test 00:17:21.196 ************************************ 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:21.196 14:27:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81985 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81985 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81985 ']' 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.196 14:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:21.197 14:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.197 14:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.197 14:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.197 [2024-11-20 14:28:00.033652] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:17:21.197 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:21.197 Zero copy mechanism will not be used. 00:17:21.197 [2024-11-20 14:28:00.034013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81985 ] 00:17:21.455 [2024-11-20 14:28:00.210702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.455 [2024-11-20 14:28:00.346762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.713 [2024-11-20 14:28:00.552718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:21.713 [2024-11-20 14:28:00.553011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:22.281 BaseBdev1_malloc 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.281 [2024-11-20 14:28:01.107095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:22.281 [2024-11-20 14:28:01.107177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.281 [2024-11-20 14:28:01.107211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:22.281 [2024-11-20 14:28:01.107231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.281 [2024-11-20 14:28:01.110059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.281 [2024-11-20 14:28:01.110114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:22.281 BaseBdev1 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.281 BaseBdev2_malloc 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.281 14:28:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.281 [2024-11-20 14:28:01.154911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:22.281 [2024-11-20 14:28:01.155008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.281 [2024-11-20 14:28:01.155044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:22.281 [2024-11-20 14:28:01.155063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.281 [2024-11-20 14:28:01.157828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.281 [2024-11-20 14:28:01.157880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:22.281 BaseBdev2 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.281 BaseBdev3_malloc 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.281 [2024-11-20 14:28:01.211189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:22.281 [2024-11-20 14:28:01.211260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.281 [2024-11-20 14:28:01.211294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:22.281 [2024-11-20 14:28:01.211329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.281 [2024-11-20 14:28:01.214092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.281 [2024-11-20 14:28:01.214145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:22.281 BaseBdev3 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.281 spare_malloc 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.281 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.540 spare_delay 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.540 [2024-11-20 14:28:01.267290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:22.540 [2024-11-20 14:28:01.267376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.540 [2024-11-20 14:28:01.267414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:22.540 [2024-11-20 14:28:01.267432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.540 [2024-11-20 14:28:01.270267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.540 [2024-11-20 14:28:01.270322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:22.540 spare 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.540 [2024-11-20 14:28:01.275387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.540 [2024-11-20 14:28:01.277772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:22.540 [2024-11-20 14:28:01.277869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:22.540 [2024-11-20 14:28:01.278016] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 00:17:22.540 [2024-11-20 14:28:01.278037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:22.540 [2024-11-20 14:28:01.278361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:22.540 [2024-11-20 14:28:01.283682] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:22.540 [2024-11-20 14:28:01.283837] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:22.540 [2024-11-20 14:28:01.284233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.540 
14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.540 "name": "raid_bdev1", 00:17:22.540 "uuid": "f985ad23-1997-4372-9d57-8f186301202d", 00:17:22.540 "strip_size_kb": 64, 00:17:22.540 "state": "online", 00:17:22.540 "raid_level": "raid5f", 00:17:22.540 "superblock": false, 00:17:22.540 "num_base_bdevs": 3, 00:17:22.540 "num_base_bdevs_discovered": 3, 00:17:22.540 "num_base_bdevs_operational": 3, 00:17:22.540 "base_bdevs_list": [ 00:17:22.540 { 00:17:22.540 "name": "BaseBdev1", 00:17:22.540 "uuid": "97df9994-0e72-58b7-9e2e-026aa9fe95a7", 00:17:22.540 "is_configured": true, 00:17:22.540 "data_offset": 0, 00:17:22.540 "data_size": 65536 00:17:22.540 }, 00:17:22.540 { 00:17:22.540 "name": "BaseBdev2", 00:17:22.540 "uuid": "21ed34a5-bf5d-5dcc-8582-abf22738b6b3", 00:17:22.540 "is_configured": true, 00:17:22.540 "data_offset": 0, 00:17:22.540 "data_size": 65536 00:17:22.540 }, 00:17:22.540 { 00:17:22.540 "name": "BaseBdev3", 00:17:22.540 "uuid": "66c292ce-d180-5c09-815e-03562ddf1397", 00:17:22.540 "is_configured": true, 00:17:22.540 "data_offset": 0, 00:17:22.540 "data_size": 65536 00:17:22.540 } 00:17:22.540 ] 00:17:22.540 }' 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.540 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:23.106 [2024-11-20 14:28:01.790353] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:23.106 
14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:23.106 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:23.107 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:23.107 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:23.107 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.107 14:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:23.365 [2024-11-20 14:28:02.202296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:23.365 /dev/nbd0 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:17:23.365 1+0 records in 00:17:23.365 1+0 records out 00:17:23.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296792 s, 13.8 MB/s 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:23.365 14:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:17:23.932 512+0 records in 00:17:23.932 512+0 records out 00:17:23.932 67108864 bytes (67 MB, 64 MiB) copied, 0.482845 s, 139 MB/s 00:17:23.932 14:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:23.932 14:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.932 14:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:23.932 14:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:23.932 14:28:02 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:23.932 14:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.932 14:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:24.190 [2024-11-20 14:28:03.055662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.190 [2024-11-20 14:28:03.069555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.190 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.190 "name": "raid_bdev1", 00:17:24.190 "uuid": "f985ad23-1997-4372-9d57-8f186301202d", 00:17:24.190 "strip_size_kb": 64, 00:17:24.190 "state": "online", 00:17:24.190 "raid_level": "raid5f", 00:17:24.190 "superblock": false, 00:17:24.190 "num_base_bdevs": 3, 00:17:24.190 "num_base_bdevs_discovered": 2, 00:17:24.191 "num_base_bdevs_operational": 2, 00:17:24.191 "base_bdevs_list": [ 00:17:24.191 { 00:17:24.191 "name": null, 00:17:24.191 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:24.191 "is_configured": false, 00:17:24.191 "data_offset": 0, 00:17:24.191 "data_size": 65536 00:17:24.191 }, 00:17:24.191 { 00:17:24.191 "name": "BaseBdev2", 00:17:24.191 "uuid": "21ed34a5-bf5d-5dcc-8582-abf22738b6b3", 00:17:24.191 "is_configured": true, 00:17:24.191 "data_offset": 0, 00:17:24.191 "data_size": 65536 00:17:24.191 }, 00:17:24.191 { 00:17:24.191 "name": "BaseBdev3", 00:17:24.191 "uuid": "66c292ce-d180-5c09-815e-03562ddf1397", 00:17:24.191 "is_configured": true, 00:17:24.191 "data_offset": 0, 00:17:24.191 "data_size": 65536 00:17:24.191 } 00:17:24.191 ] 00:17:24.191 }' 00:17:24.191 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.191 14:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.757 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:24.757 14:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.757 14:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.757 [2024-11-20 14:28:03.565724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:24.757 [2024-11-20 14:28:03.581237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:17:24.757 14:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.757 14:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:24.757 [2024-11-20 14:28:03.588748] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:25.693 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.693 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.693 
14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.693 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.693 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.693 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.693 14:28:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.693 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.693 14:28:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.693 14:28:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.693 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.693 "name": "raid_bdev1", 00:17:25.693 "uuid": "f985ad23-1997-4372-9d57-8f186301202d", 00:17:25.693 "strip_size_kb": 64, 00:17:25.693 "state": "online", 00:17:25.693 "raid_level": "raid5f", 00:17:25.693 "superblock": false, 00:17:25.693 "num_base_bdevs": 3, 00:17:25.693 "num_base_bdevs_discovered": 3, 00:17:25.693 "num_base_bdevs_operational": 3, 00:17:25.693 "process": { 00:17:25.693 "type": "rebuild", 00:17:25.693 "target": "spare", 00:17:25.693 "progress": { 00:17:25.693 "blocks": 18432, 00:17:25.694 "percent": 14 00:17:25.694 } 00:17:25.694 }, 00:17:25.694 "base_bdevs_list": [ 00:17:25.694 { 00:17:25.694 "name": "spare", 00:17:25.694 "uuid": "28d1d38e-bc41-52eb-8855-89a27c3aed73", 00:17:25.694 "is_configured": true, 00:17:25.694 "data_offset": 0, 00:17:25.694 "data_size": 65536 00:17:25.694 }, 00:17:25.694 { 00:17:25.694 "name": "BaseBdev2", 00:17:25.694 "uuid": "21ed34a5-bf5d-5dcc-8582-abf22738b6b3", 00:17:25.694 "is_configured": true, 00:17:25.694 "data_offset": 0, 00:17:25.694 "data_size": 65536 00:17:25.694 }, 00:17:25.694 
{ 00:17:25.694 "name": "BaseBdev3", 00:17:25.694 "uuid": "66c292ce-d180-5c09-815e-03562ddf1397", 00:17:25.694 "is_configured": true, 00:17:25.694 "data_offset": 0, 00:17:25.694 "data_size": 65536 00:17:25.694 } 00:17:25.694 ] 00:17:25.694 }' 00:17:25.694 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.952 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:25.952 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.952 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.952 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:25.952 14:28:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.952 14:28:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.952 [2024-11-20 14:28:04.758689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:25.953 [2024-11-20 14:28:04.804150] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:25.953 [2024-11-20 14:28:04.804244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.953 [2024-11-20 14:28:04.804276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:25.953 [2024-11-20 14:28:04.804288] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.953 "name": "raid_bdev1", 00:17:25.953 "uuid": "f985ad23-1997-4372-9d57-8f186301202d", 00:17:25.953 "strip_size_kb": 64, 00:17:25.953 "state": "online", 00:17:25.953 "raid_level": "raid5f", 00:17:25.953 "superblock": false, 00:17:25.953 "num_base_bdevs": 3, 00:17:25.953 "num_base_bdevs_discovered": 2, 00:17:25.953 "num_base_bdevs_operational": 2, 00:17:25.953 "base_bdevs_list": [ 00:17:25.953 { 00:17:25.953 "name": null, 00:17:25.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.953 
"is_configured": false, 00:17:25.953 "data_offset": 0, 00:17:25.953 "data_size": 65536 00:17:25.953 }, 00:17:25.953 { 00:17:25.953 "name": "BaseBdev2", 00:17:25.953 "uuid": "21ed34a5-bf5d-5dcc-8582-abf22738b6b3", 00:17:25.953 "is_configured": true, 00:17:25.953 "data_offset": 0, 00:17:25.953 "data_size": 65536 00:17:25.953 }, 00:17:25.953 { 00:17:25.953 "name": "BaseBdev3", 00:17:25.953 "uuid": "66c292ce-d180-5c09-815e-03562ddf1397", 00:17:25.953 "is_configured": true, 00:17:25.953 "data_offset": 0, 00:17:25.953 "data_size": 65536 00:17:25.953 } 00:17:25.953 ] 00:17:25.953 }' 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.953 14:28:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.521 14:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:26.521 14:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.521 14:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:26.521 14:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:26.521 14:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.521 14:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.521 14:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.521 14:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.521 14:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.521 14:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.521 14:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.521 "name": 
"raid_bdev1", 00:17:26.521 "uuid": "f985ad23-1997-4372-9d57-8f186301202d", 00:17:26.521 "strip_size_kb": 64, 00:17:26.521 "state": "online", 00:17:26.521 "raid_level": "raid5f", 00:17:26.521 "superblock": false, 00:17:26.521 "num_base_bdevs": 3, 00:17:26.521 "num_base_bdevs_discovered": 2, 00:17:26.521 "num_base_bdevs_operational": 2, 00:17:26.521 "base_bdevs_list": [ 00:17:26.521 { 00:17:26.521 "name": null, 00:17:26.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.521 "is_configured": false, 00:17:26.521 "data_offset": 0, 00:17:26.521 "data_size": 65536 00:17:26.521 }, 00:17:26.521 { 00:17:26.521 "name": "BaseBdev2", 00:17:26.521 "uuid": "21ed34a5-bf5d-5dcc-8582-abf22738b6b3", 00:17:26.521 "is_configured": true, 00:17:26.521 "data_offset": 0, 00:17:26.521 "data_size": 65536 00:17:26.521 }, 00:17:26.521 { 00:17:26.521 "name": "BaseBdev3", 00:17:26.521 "uuid": "66c292ce-d180-5c09-815e-03562ddf1397", 00:17:26.521 "is_configured": true, 00:17:26.521 "data_offset": 0, 00:17:26.521 "data_size": 65536 00:17:26.521 } 00:17:26.521 ] 00:17:26.521 }' 00:17:26.521 14:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.521 14:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:26.521 14:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.779 14:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:26.779 14:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:26.779 14:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.779 14:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.779 [2024-11-20 14:28:05.547456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.779 [2024-11-20 
14:28:05.562368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:26.779 14:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.779 14:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:26.779 [2024-11-20 14:28:05.569777] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:27.718 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.718 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.718 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.718 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.718 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.718 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.718 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.718 14:28:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.718 14:28:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.718 14:28:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.718 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.718 "name": "raid_bdev1", 00:17:27.718 "uuid": "f985ad23-1997-4372-9d57-8f186301202d", 00:17:27.718 "strip_size_kb": 64, 00:17:27.718 "state": "online", 00:17:27.718 "raid_level": "raid5f", 00:17:27.718 "superblock": false, 00:17:27.718 "num_base_bdevs": 3, 00:17:27.718 "num_base_bdevs_discovered": 3, 00:17:27.718 "num_base_bdevs_operational": 3, 
00:17:27.718 "process": { 00:17:27.718 "type": "rebuild", 00:17:27.718 "target": "spare", 00:17:27.718 "progress": { 00:17:27.718 "blocks": 18432, 00:17:27.718 "percent": 14 00:17:27.718 } 00:17:27.718 }, 00:17:27.718 "base_bdevs_list": [ 00:17:27.718 { 00:17:27.718 "name": "spare", 00:17:27.718 "uuid": "28d1d38e-bc41-52eb-8855-89a27c3aed73", 00:17:27.718 "is_configured": true, 00:17:27.718 "data_offset": 0, 00:17:27.718 "data_size": 65536 00:17:27.718 }, 00:17:27.718 { 00:17:27.718 "name": "BaseBdev2", 00:17:27.718 "uuid": "21ed34a5-bf5d-5dcc-8582-abf22738b6b3", 00:17:27.718 "is_configured": true, 00:17:27.718 "data_offset": 0, 00:17:27.718 "data_size": 65536 00:17:27.718 }, 00:17:27.718 { 00:17:27.718 "name": "BaseBdev3", 00:17:27.718 "uuid": "66c292ce-d180-5c09-815e-03562ddf1397", 00:17:27.718 "is_configured": true, 00:17:27.718 "data_offset": 0, 00:17:27.718 "data_size": 65536 00:17:27.718 } 00:17:27.718 ] 00:17:27.718 }' 00:17:27.718 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.718 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.718 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=593 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.987 "name": "raid_bdev1", 00:17:27.987 "uuid": "f985ad23-1997-4372-9d57-8f186301202d", 00:17:27.987 "strip_size_kb": 64, 00:17:27.987 "state": "online", 00:17:27.987 "raid_level": "raid5f", 00:17:27.987 "superblock": false, 00:17:27.987 "num_base_bdevs": 3, 00:17:27.987 "num_base_bdevs_discovered": 3, 00:17:27.987 "num_base_bdevs_operational": 3, 00:17:27.987 "process": { 00:17:27.987 "type": "rebuild", 00:17:27.987 "target": "spare", 00:17:27.987 "progress": { 00:17:27.987 "blocks": 22528, 00:17:27.987 "percent": 17 00:17:27.987 } 00:17:27.987 }, 00:17:27.987 "base_bdevs_list": [ 00:17:27.987 { 00:17:27.987 "name": "spare", 00:17:27.987 "uuid": "28d1d38e-bc41-52eb-8855-89a27c3aed73", 00:17:27.987 "is_configured": true, 00:17:27.987 "data_offset": 0, 00:17:27.987 "data_size": 65536 00:17:27.987 }, 00:17:27.987 { 00:17:27.987 "name": "BaseBdev2", 
00:17:27.987 "uuid": "21ed34a5-bf5d-5dcc-8582-abf22738b6b3", 00:17:27.987 "is_configured": true, 00:17:27.987 "data_offset": 0, 00:17:27.987 "data_size": 65536 00:17:27.987 }, 00:17:27.987 { 00:17:27.987 "name": "BaseBdev3", 00:17:27.987 "uuid": "66c292ce-d180-5c09-815e-03562ddf1397", 00:17:27.987 "is_configured": true, 00:17:27.987 "data_offset": 0, 00:17:27.987 "data_size": 65536 00:17:27.987 } 00:17:27.987 ] 00:17:27.987 }' 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.987 14:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:28.922 14:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.922 14:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.922 14:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.922 14:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.922 14:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.922 14:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.922 14:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.922 14:28:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.922 14:28:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.922 14:28:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.922 14:28:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.180 14:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.180 "name": "raid_bdev1", 00:17:29.180 "uuid": "f985ad23-1997-4372-9d57-8f186301202d", 00:17:29.180 "strip_size_kb": 64, 00:17:29.180 "state": "online", 00:17:29.180 "raid_level": "raid5f", 00:17:29.180 "superblock": false, 00:17:29.180 "num_base_bdevs": 3, 00:17:29.180 "num_base_bdevs_discovered": 3, 00:17:29.180 "num_base_bdevs_operational": 3, 00:17:29.180 "process": { 00:17:29.180 "type": "rebuild", 00:17:29.180 "target": "spare", 00:17:29.180 "progress": { 00:17:29.180 "blocks": 45056, 00:17:29.180 "percent": 34 00:17:29.180 } 00:17:29.180 }, 00:17:29.180 "base_bdevs_list": [ 00:17:29.180 { 00:17:29.180 "name": "spare", 00:17:29.180 "uuid": "28d1d38e-bc41-52eb-8855-89a27c3aed73", 00:17:29.180 "is_configured": true, 00:17:29.180 "data_offset": 0, 00:17:29.180 "data_size": 65536 00:17:29.180 }, 00:17:29.180 { 00:17:29.180 "name": "BaseBdev2", 00:17:29.180 "uuid": "21ed34a5-bf5d-5dcc-8582-abf22738b6b3", 00:17:29.180 "is_configured": true, 00:17:29.180 "data_offset": 0, 00:17:29.180 "data_size": 65536 00:17:29.180 }, 00:17:29.180 { 00:17:29.180 "name": "BaseBdev3", 00:17:29.180 "uuid": "66c292ce-d180-5c09-815e-03562ddf1397", 00:17:29.180 "is_configured": true, 00:17:29.180 "data_offset": 0, 00:17:29.180 "data_size": 65536 00:17:29.181 } 00:17:29.181 ] 00:17:29.181 }' 00:17:29.181 14:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.181 14:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.181 14:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.181 14:28:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.181 14:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:30.117 14:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.117 14:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.117 14:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.117 14:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.117 14:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.117 14:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.117 14:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.117 14:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.117 14:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.117 14:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.117 14:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.117 14:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.117 "name": "raid_bdev1", 00:17:30.117 "uuid": "f985ad23-1997-4372-9d57-8f186301202d", 00:17:30.117 "strip_size_kb": 64, 00:17:30.117 "state": "online", 00:17:30.117 "raid_level": "raid5f", 00:17:30.117 "superblock": false, 00:17:30.117 "num_base_bdevs": 3, 00:17:30.117 "num_base_bdevs_discovered": 3, 00:17:30.117 "num_base_bdevs_operational": 3, 00:17:30.117 "process": { 00:17:30.117 "type": "rebuild", 00:17:30.117 "target": "spare", 00:17:30.117 "progress": { 00:17:30.117 "blocks": 69632, 
00:17:30.117 "percent": 53 00:17:30.117 } 00:17:30.117 }, 00:17:30.117 "base_bdevs_list": [ 00:17:30.117 { 00:17:30.117 "name": "spare", 00:17:30.117 "uuid": "28d1d38e-bc41-52eb-8855-89a27c3aed73", 00:17:30.117 "is_configured": true, 00:17:30.117 "data_offset": 0, 00:17:30.117 "data_size": 65536 00:17:30.117 }, 00:17:30.117 { 00:17:30.117 "name": "BaseBdev2", 00:17:30.117 "uuid": "21ed34a5-bf5d-5dcc-8582-abf22738b6b3", 00:17:30.117 "is_configured": true, 00:17:30.117 "data_offset": 0, 00:17:30.117 "data_size": 65536 00:17:30.117 }, 00:17:30.117 { 00:17:30.117 "name": "BaseBdev3", 00:17:30.117 "uuid": "66c292ce-d180-5c09-815e-03562ddf1397", 00:17:30.117 "is_configured": true, 00:17:30.117 "data_offset": 0, 00:17:30.117 "data_size": 65536 00:17:30.117 } 00:17:30.117 ] 00:17:30.117 }' 00:17:30.117 14:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.374 14:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.374 14:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.374 14:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.374 14:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:31.310 14:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:31.310 14:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.310 14:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.310 14:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.310 14:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.310 14:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:17:31.310 14:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.310 14:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.310 14:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.310 14:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.310 14:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.310 14:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.310 "name": "raid_bdev1", 00:17:31.310 "uuid": "f985ad23-1997-4372-9d57-8f186301202d", 00:17:31.310 "strip_size_kb": 64, 00:17:31.310 "state": "online", 00:17:31.310 "raid_level": "raid5f", 00:17:31.310 "superblock": false, 00:17:31.310 "num_base_bdevs": 3, 00:17:31.310 "num_base_bdevs_discovered": 3, 00:17:31.310 "num_base_bdevs_operational": 3, 00:17:31.310 "process": { 00:17:31.310 "type": "rebuild", 00:17:31.310 "target": "spare", 00:17:31.310 "progress": { 00:17:31.310 "blocks": 92160, 00:17:31.310 "percent": 70 00:17:31.310 } 00:17:31.310 }, 00:17:31.310 "base_bdevs_list": [ 00:17:31.310 { 00:17:31.310 "name": "spare", 00:17:31.310 "uuid": "28d1d38e-bc41-52eb-8855-89a27c3aed73", 00:17:31.310 "is_configured": true, 00:17:31.310 "data_offset": 0, 00:17:31.310 "data_size": 65536 00:17:31.310 }, 00:17:31.310 { 00:17:31.310 "name": "BaseBdev2", 00:17:31.310 "uuid": "21ed34a5-bf5d-5dcc-8582-abf22738b6b3", 00:17:31.310 "is_configured": true, 00:17:31.310 "data_offset": 0, 00:17:31.310 "data_size": 65536 00:17:31.310 }, 00:17:31.310 { 00:17:31.310 "name": "BaseBdev3", 00:17:31.310 "uuid": "66c292ce-d180-5c09-815e-03562ddf1397", 00:17:31.310 "is_configured": true, 00:17:31.310 "data_offset": 0, 00:17:31.310 "data_size": 65536 00:17:31.310 } 00:17:31.310 ] 00:17:31.310 }' 00:17:31.310 14:28:10 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.310 14:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:31.310 14:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.569 14:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:31.569 14:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:32.509 14:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:32.509 14:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.509 14:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.509 14:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.509 14:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.509 14:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.509 14:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.509 14:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.509 14:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.509 14:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.509 14:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.509 14:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.509 "name": "raid_bdev1", 00:17:32.509 "uuid": "f985ad23-1997-4372-9d57-8f186301202d", 00:17:32.509 "strip_size_kb": 64, 00:17:32.509 "state": "online", 00:17:32.509 "raid_level": "raid5f", 
00:17:32.509 "superblock": false, 00:17:32.509 "num_base_bdevs": 3, 00:17:32.509 "num_base_bdevs_discovered": 3, 00:17:32.509 "num_base_bdevs_operational": 3, 00:17:32.509 "process": { 00:17:32.509 "type": "rebuild", 00:17:32.509 "target": "spare", 00:17:32.509 "progress": { 00:17:32.509 "blocks": 116736, 00:17:32.509 "percent": 89 00:17:32.509 } 00:17:32.509 }, 00:17:32.509 "base_bdevs_list": [ 00:17:32.509 { 00:17:32.509 "name": "spare", 00:17:32.509 "uuid": "28d1d38e-bc41-52eb-8855-89a27c3aed73", 00:17:32.509 "is_configured": true, 00:17:32.509 "data_offset": 0, 00:17:32.509 "data_size": 65536 00:17:32.509 }, 00:17:32.509 { 00:17:32.509 "name": "BaseBdev2", 00:17:32.509 "uuid": "21ed34a5-bf5d-5dcc-8582-abf22738b6b3", 00:17:32.509 "is_configured": true, 00:17:32.509 "data_offset": 0, 00:17:32.509 "data_size": 65536 00:17:32.509 }, 00:17:32.509 { 00:17:32.509 "name": "BaseBdev3", 00:17:32.509 "uuid": "66c292ce-d180-5c09-815e-03562ddf1397", 00:17:32.509 "is_configured": true, 00:17:32.509 "data_offset": 0, 00:17:32.509 "data_size": 65536 00:17:32.509 } 00:17:32.509 ] 00:17:32.509 }' 00:17:32.509 14:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.509 14:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.509 14:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.768 14:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.768 14:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:33.334 [2024-11-20 14:28:12.046722] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:33.334 [2024-11-20 14:28:12.047071] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:33.334 [2024-11-20 14:28:12.047142] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:17:33.593 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:33.593 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.593 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.593 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.593 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.593 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.593 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.593 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.593 14:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.593 14:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.593 14:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.593 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.593 "name": "raid_bdev1", 00:17:33.593 "uuid": "f985ad23-1997-4372-9d57-8f186301202d", 00:17:33.593 "strip_size_kb": 64, 00:17:33.593 "state": "online", 00:17:33.593 "raid_level": "raid5f", 00:17:33.593 "superblock": false, 00:17:33.593 "num_base_bdevs": 3, 00:17:33.593 "num_base_bdevs_discovered": 3, 00:17:33.593 "num_base_bdevs_operational": 3, 00:17:33.593 "base_bdevs_list": [ 00:17:33.593 { 00:17:33.593 "name": "spare", 00:17:33.593 "uuid": "28d1d38e-bc41-52eb-8855-89a27c3aed73", 00:17:33.593 "is_configured": true, 00:17:33.593 "data_offset": 0, 00:17:33.593 "data_size": 65536 00:17:33.593 }, 00:17:33.593 { 00:17:33.593 "name": 
"BaseBdev2", 00:17:33.593 "uuid": "21ed34a5-bf5d-5dcc-8582-abf22738b6b3", 00:17:33.593 "is_configured": true, 00:17:33.593 "data_offset": 0, 00:17:33.593 "data_size": 65536 00:17:33.593 }, 00:17:33.593 { 00:17:33.593 "name": "BaseBdev3", 00:17:33.593 "uuid": "66c292ce-d180-5c09-815e-03562ddf1397", 00:17:33.593 "is_configured": true, 00:17:33.593 "data_offset": 0, 00:17:33.593 "data_size": 65536 00:17:33.593 } 00:17:33.593 ] 00:17:33.593 }' 00:17:33.593 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.853 14:28:12 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.853 "name": "raid_bdev1", 00:17:33.853 "uuid": "f985ad23-1997-4372-9d57-8f186301202d", 00:17:33.853 "strip_size_kb": 64, 00:17:33.853 "state": "online", 00:17:33.853 "raid_level": "raid5f", 00:17:33.853 "superblock": false, 00:17:33.853 "num_base_bdevs": 3, 00:17:33.853 "num_base_bdevs_discovered": 3, 00:17:33.853 "num_base_bdevs_operational": 3, 00:17:33.853 "base_bdevs_list": [ 00:17:33.853 { 00:17:33.853 "name": "spare", 00:17:33.853 "uuid": "28d1d38e-bc41-52eb-8855-89a27c3aed73", 00:17:33.853 "is_configured": true, 00:17:33.853 "data_offset": 0, 00:17:33.853 "data_size": 65536 00:17:33.853 }, 00:17:33.853 { 00:17:33.853 "name": "BaseBdev2", 00:17:33.853 "uuid": "21ed34a5-bf5d-5dcc-8582-abf22738b6b3", 00:17:33.853 "is_configured": true, 00:17:33.853 "data_offset": 0, 00:17:33.853 "data_size": 65536 00:17:33.853 }, 00:17:33.853 { 00:17:33.853 "name": "BaseBdev3", 00:17:33.853 "uuid": "66c292ce-d180-5c09-815e-03562ddf1397", 00:17:33.853 "is_configured": true, 00:17:33.853 "data_offset": 0, 00:17:33.853 "data_size": 65536 00:17:33.853 } 00:17:33.853 ] 00:17:33.853 }' 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.853 14:28:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.853 14:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.113 14:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.113 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.113 "name": "raid_bdev1", 00:17:34.113 "uuid": "f985ad23-1997-4372-9d57-8f186301202d", 00:17:34.113 "strip_size_kb": 64, 00:17:34.113 "state": "online", 00:17:34.113 "raid_level": "raid5f", 00:17:34.113 "superblock": false, 00:17:34.113 "num_base_bdevs": 3, 00:17:34.113 "num_base_bdevs_discovered": 3, 00:17:34.113 "num_base_bdevs_operational": 3, 00:17:34.113 "base_bdevs_list": [ 00:17:34.113 { 00:17:34.113 "name": "spare", 00:17:34.113 "uuid": "28d1d38e-bc41-52eb-8855-89a27c3aed73", 00:17:34.113 "is_configured": 
true, 00:17:34.113 "data_offset": 0, 00:17:34.113 "data_size": 65536 00:17:34.113 }, 00:17:34.113 { 00:17:34.113 "name": "BaseBdev2", 00:17:34.113 "uuid": "21ed34a5-bf5d-5dcc-8582-abf22738b6b3", 00:17:34.113 "is_configured": true, 00:17:34.113 "data_offset": 0, 00:17:34.113 "data_size": 65536 00:17:34.113 }, 00:17:34.113 { 00:17:34.113 "name": "BaseBdev3", 00:17:34.113 "uuid": "66c292ce-d180-5c09-815e-03562ddf1397", 00:17:34.113 "is_configured": true, 00:17:34.113 "data_offset": 0, 00:17:34.113 "data_size": 65536 00:17:34.113 } 00:17:34.113 ] 00:17:34.113 }' 00:17:34.113 14:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.113 14:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.371 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:34.371 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.371 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.371 [2024-11-20 14:28:13.329970] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.371 [2024-11-20 14:28:13.330168] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.371 [2024-11-20 14:28:13.330302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.371 [2024-11-20 14:28:13.330410] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.371 [2024-11-20 14:28:13.330436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:34.371 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.371 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.371 14:28:13 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.371 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:34.371 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.371 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.629 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:34.629 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:34.629 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:34.629 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:34.629 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:34.629 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:34.629 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:34.629 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:34.629 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:34.629 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:34.629 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:34.629 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:34.629 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:34.888 /dev/nbd0 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:34.888 14:28:13 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:34.888 1+0 records in 00:17:34.888 1+0 records out 00:17:34.888 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565244 s, 7.2 MB/s 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 
-- # (( i < 2 )) 00:17:34.888 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:35.147 /dev/nbd1 00:17:35.147 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:35.147 14:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:35.147 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:35.147 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:35.147 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:35.147 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:35.147 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:35.147 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:35.147 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:35.147 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:35.147 14:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:35.147 1+0 records in 00:17:35.147 1+0 records out 00:17:35.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003978 s, 10.3 MB/s 00:17:35.147 14:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.147 14:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:35.147 14:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.147 14:28:14 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:35.147 14:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:35.147 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:35.147 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:35.147 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:35.405 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:35.405 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:35.405 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:35.405 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:35.405 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:35.405 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:35.405 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:35.664 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:35.664 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:35.664 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:35.664 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:35.664 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:35.664 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:35.664 14:28:14 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:17:35.664 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:35.664 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:35.664 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81985 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81985 ']' 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81985 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81985 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:35.923 killing process with pid 81985 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81985' 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81985 00:17:35.923 Received shutdown signal, test time was about 60.000000 seconds 00:17:35.923 00:17:35.923 Latency(us) 00:17:35.923 [2024-11-20T14:28:14.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.923 [2024-11-20T14:28:14.905Z] =================================================================================================================== 00:17:35.923 [2024-11-20T14:28:14.905Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:35.923 [2024-11-20 14:28:14.739651] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:35.923 14:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81985 00:17:36.188 [2024-11-20 14:28:15.084348] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:37.578 00:17:37.578 real 0m16.195s 00:17:37.578 user 0m20.717s 00:17:37.578 sys 0m1.974s 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.578 ************************************ 00:17:37.578 END TEST raid5f_rebuild_test 00:17:37.578 ************************************ 00:17:37.578 14:28:16 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:17:37.578 14:28:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:17:37.578 14:28:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.578 14:28:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:37.578 ************************************ 00:17:37.578 START TEST raid5f_rebuild_test_sb 00:17:37.578 ************************************ 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82436 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82436 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82436 ']' 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 
3M -q 2 -U -z -L bdev_raid 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.578 14:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.578 [2024-11-20 14:28:16.289229] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:17:37.578 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:37.578 Zero copy mechanism will not be used. 
00:17:37.578 [2024-11-20 14:28:16.289417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82436 ] 00:17:37.578 [2024-11-20 14:28:16.473628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.840 [2024-11-20 14:28:16.603062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.840 [2024-11-20 14:28:16.806363] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.840 [2024-11-20 14:28:16.806456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.407 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.407 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:38.407 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:38.407 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:38.407 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.407 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.407 BaseBdev1_malloc 00:17:38.407 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.407 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:38.407 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.407 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.407 [2024-11-20 14:28:17.362626] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:38.407 [2024-11-20 14:28:17.362703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.407 [2024-11-20 14:28:17.362731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:38.407 [2024-11-20 14:28:17.362749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.407 [2024-11-20 14:28:17.365477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.407 [2024-11-20 14:28:17.365534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:38.407 BaseBdev1 00:17:38.407 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.407 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:38.407 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:38.407 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.407 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.666 BaseBdev2_malloc 00:17:38.666 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.666 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:38.666 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.666 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.666 [2024-11-20 14:28:17.414407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:38.666 [2024-11-20 14:28:17.414491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:38.666 [2024-11-20 14:28:17.414523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:38.666 [2024-11-20 14:28:17.414541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.666 [2024-11-20 14:28:17.417335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.666 [2024-11-20 14:28:17.417386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:38.666 BaseBdev2 00:17:38.666 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.666 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.667 BaseBdev3_malloc 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.667 [2024-11-20 14:28:17.476401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:38.667 [2024-11-20 14:28:17.476478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.667 [2024-11-20 14:28:17.476509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:38.667 [2024-11-20 
14:28:17.476528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.667 [2024-11-20 14:28:17.479279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.667 [2024-11-20 14:28:17.479340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:38.667 BaseBdev3 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.667 spare_malloc 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.667 spare_delay 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.667 [2024-11-20 14:28:17.544113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:38.667 [2024-11-20 14:28:17.544188] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.667 [2024-11-20 14:28:17.544219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:38.667 [2024-11-20 14:28:17.544237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.667 [2024-11-20 14:28:17.547059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.667 [2024-11-20 14:28:17.547110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:38.667 spare 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.667 [2024-11-20 14:28:17.552213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.667 [2024-11-20 14:28:17.554572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:38.667 [2024-11-20 14:28:17.554674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:38.667 [2024-11-20 14:28:17.554926] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:38.667 [2024-11-20 14:28:17.554956] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:38.667 [2024-11-20 14:28:17.555328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:38.667 [2024-11-20 14:28:17.560489] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:38.667 [2024-11-20 14:28:17.560531] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:38.667 [2024-11-20 14:28:17.560765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.667 "name": "raid_bdev1", 00:17:38.667 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:38.667 "strip_size_kb": 64, 00:17:38.667 "state": "online", 00:17:38.667 "raid_level": "raid5f", 00:17:38.667 "superblock": true, 00:17:38.667 "num_base_bdevs": 3, 00:17:38.667 "num_base_bdevs_discovered": 3, 00:17:38.667 "num_base_bdevs_operational": 3, 00:17:38.667 "base_bdevs_list": [ 00:17:38.667 { 00:17:38.667 "name": "BaseBdev1", 00:17:38.667 "uuid": "16034b8b-f63e-50c1-9e3a-4055074a36f6", 00:17:38.667 "is_configured": true, 00:17:38.667 "data_offset": 2048, 00:17:38.667 "data_size": 63488 00:17:38.667 }, 00:17:38.667 { 00:17:38.667 "name": "BaseBdev2", 00:17:38.667 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:38.667 "is_configured": true, 00:17:38.667 "data_offset": 2048, 00:17:38.667 "data_size": 63488 00:17:38.667 }, 00:17:38.667 { 00:17:38.667 "name": "BaseBdev3", 00:17:38.667 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:38.667 "is_configured": true, 00:17:38.667 "data_offset": 2048, 00:17:38.667 "data_size": 63488 00:17:38.667 } 00:17:38.667 ] 00:17:38.667 }' 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.667 14:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.235 [2024-11-20 14:28:18.106764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:39.235 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:39.494 [2024-11-20 14:28:18.466708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:39.753 /dev/nbd0 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:39.753 1+0 records in 00:17:39.753 1+0 records out 00:17:39.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370522 s, 11.1 MB/s 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:39.753 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:17:40.012 496+0 records in 00:17:40.012 496+0 records out 00:17:40.012 65011712 bytes (65 MB, 62 MiB) copied, 0.451975 s, 144 MB/s 00:17:40.012 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:40.012 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:40.012 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:40.012 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:40.012 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:40.012 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:17:40.012 14:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:40.271 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:40.271 [2024-11-20 14:28:19.249365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.530 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:40.530 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:40.530 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:40.530 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:40.530 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:40.530 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:40.530 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:40.530 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:40.530 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.530 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.530 [2024-11-20 14:28:19.263162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:40.530 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.530 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:40.530 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.531 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.531 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.531 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.531 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:40.531 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.531 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.531 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.531 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.531 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.531 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.531 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.531 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.531 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.531 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.531 "name": "raid_bdev1", 00:17:40.531 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:40.531 "strip_size_kb": 64, 00:17:40.531 "state": "online", 00:17:40.531 "raid_level": "raid5f", 00:17:40.531 "superblock": true, 00:17:40.531 "num_base_bdevs": 3, 00:17:40.531 "num_base_bdevs_discovered": 2, 00:17:40.531 "num_base_bdevs_operational": 2, 00:17:40.531 "base_bdevs_list": [ 00:17:40.531 { 00:17:40.531 "name": null, 00:17:40.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.531 "is_configured": 
false, 00:17:40.531 "data_offset": 0, 00:17:40.531 "data_size": 63488 00:17:40.531 }, 00:17:40.531 { 00:17:40.531 "name": "BaseBdev2", 00:17:40.531 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:40.531 "is_configured": true, 00:17:40.531 "data_offset": 2048, 00:17:40.531 "data_size": 63488 00:17:40.531 }, 00:17:40.531 { 00:17:40.531 "name": "BaseBdev3", 00:17:40.531 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:40.531 "is_configured": true, 00:17:40.531 "data_offset": 2048, 00:17:40.531 "data_size": 63488 00:17:40.531 } 00:17:40.531 ] 00:17:40.531 }' 00:17:40.531 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.531 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.790 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:40.790 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.790 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.790 [2024-11-20 14:28:19.747410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.790 [2024-11-20 14:28:19.762997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:17:40.790 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.790 14:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:40.790 [2024-11-20 14:28:19.770454] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:42.189 14:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.189 14:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.189 14:28:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.189 14:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.189 14:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.189 14:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.189 14:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.189 14:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.189 14:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.189 14:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.189 14:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.189 "name": "raid_bdev1", 00:17:42.189 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:42.189 "strip_size_kb": 64, 00:17:42.189 "state": "online", 00:17:42.189 "raid_level": "raid5f", 00:17:42.189 "superblock": true, 00:17:42.189 "num_base_bdevs": 3, 00:17:42.189 "num_base_bdevs_discovered": 3, 00:17:42.189 "num_base_bdevs_operational": 3, 00:17:42.189 "process": { 00:17:42.189 "type": "rebuild", 00:17:42.189 "target": "spare", 00:17:42.189 "progress": { 00:17:42.189 "blocks": 18432, 00:17:42.189 "percent": 14 00:17:42.189 } 00:17:42.189 }, 00:17:42.189 "base_bdevs_list": [ 00:17:42.189 { 00:17:42.189 "name": "spare", 00:17:42.189 "uuid": "abbb22ca-6a32-5afb-8dcd-36f9966b5c47", 00:17:42.189 "is_configured": true, 00:17:42.189 "data_offset": 2048, 00:17:42.189 "data_size": 63488 00:17:42.189 }, 00:17:42.189 { 00:17:42.189 "name": "BaseBdev2", 00:17:42.189 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:42.189 "is_configured": true, 00:17:42.189 "data_offset": 2048, 00:17:42.189 "data_size": 63488 
00:17:42.189 }, 00:17:42.189 { 00:17:42.189 "name": "BaseBdev3", 00:17:42.189 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:42.189 "is_configured": true, 00:17:42.189 "data_offset": 2048, 00:17:42.189 "data_size": 63488 00:17:42.189 } 00:17:42.189 ] 00:17:42.190 }' 00:17:42.190 14:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.190 14:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.190 14:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.190 14:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.190 14:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:42.190 14:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.190 14:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.190 [2024-11-20 14:28:20.908687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:42.190 [2024-11-20 14:28:20.985854] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:42.190 [2024-11-20 14:28:20.985975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.190 [2024-11-20 14:28:20.986044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:42.190 [2024-11-20 14:28:20.986059] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.190 "name": "raid_bdev1", 00:17:42.190 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:42.190 "strip_size_kb": 64, 00:17:42.190 "state": "online", 00:17:42.190 "raid_level": "raid5f", 00:17:42.190 "superblock": true, 00:17:42.190 "num_base_bdevs": 3, 00:17:42.190 "num_base_bdevs_discovered": 2, 00:17:42.190 "num_base_bdevs_operational": 2, 00:17:42.190 "base_bdevs_list": [ 00:17:42.190 
{ 00:17:42.190 "name": null, 00:17:42.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.190 "is_configured": false, 00:17:42.190 "data_offset": 0, 00:17:42.190 "data_size": 63488 00:17:42.190 }, 00:17:42.190 { 00:17:42.190 "name": "BaseBdev2", 00:17:42.190 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:42.190 "is_configured": true, 00:17:42.190 "data_offset": 2048, 00:17:42.190 "data_size": 63488 00:17:42.190 }, 00:17:42.190 { 00:17:42.190 "name": "BaseBdev3", 00:17:42.190 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:42.190 "is_configured": true, 00:17:42.190 "data_offset": 2048, 00:17:42.190 "data_size": 63488 00:17:42.190 } 00:17:42.190 ] 00:17:42.190 }' 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.190 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.784 "name": "raid_bdev1", 00:17:42.784 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:42.784 "strip_size_kb": 64, 00:17:42.784 "state": "online", 00:17:42.784 "raid_level": "raid5f", 00:17:42.784 "superblock": true, 00:17:42.784 "num_base_bdevs": 3, 00:17:42.784 "num_base_bdevs_discovered": 2, 00:17:42.784 "num_base_bdevs_operational": 2, 00:17:42.784 "base_bdevs_list": [ 00:17:42.784 { 00:17:42.784 "name": null, 00:17:42.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.784 "is_configured": false, 00:17:42.784 "data_offset": 0, 00:17:42.784 "data_size": 63488 00:17:42.784 }, 00:17:42.784 { 00:17:42.784 "name": "BaseBdev2", 00:17:42.784 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:42.784 "is_configured": true, 00:17:42.784 "data_offset": 2048, 00:17:42.784 "data_size": 63488 00:17:42.784 }, 00:17:42.784 { 00:17:42.784 "name": "BaseBdev3", 00:17:42.784 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:42.784 "is_configured": true, 00:17:42.784 "data_offset": 2048, 00:17:42.784 "data_size": 63488 00:17:42.784 } 00:17:42.784 ] 00:17:42.784 }' 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:42.784 [2024-11-20 14:28:21.677707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.784 [2024-11-20 14:28:21.692140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.784 14:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:42.784 [2024-11-20 14:28:21.699301] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:43.740 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.740 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.740 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.740 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.740 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.740 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.740 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.740 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.740 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.740 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.999 "name": "raid_bdev1", 00:17:43.999 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:43.999 "strip_size_kb": 64, 00:17:43.999 "state": "online", 
00:17:43.999 "raid_level": "raid5f", 00:17:43.999 "superblock": true, 00:17:43.999 "num_base_bdevs": 3, 00:17:43.999 "num_base_bdevs_discovered": 3, 00:17:43.999 "num_base_bdevs_operational": 3, 00:17:43.999 "process": { 00:17:43.999 "type": "rebuild", 00:17:43.999 "target": "spare", 00:17:43.999 "progress": { 00:17:43.999 "blocks": 18432, 00:17:43.999 "percent": 14 00:17:43.999 } 00:17:43.999 }, 00:17:43.999 "base_bdevs_list": [ 00:17:43.999 { 00:17:43.999 "name": "spare", 00:17:43.999 "uuid": "abbb22ca-6a32-5afb-8dcd-36f9966b5c47", 00:17:43.999 "is_configured": true, 00:17:43.999 "data_offset": 2048, 00:17:43.999 "data_size": 63488 00:17:43.999 }, 00:17:43.999 { 00:17:43.999 "name": "BaseBdev2", 00:17:43.999 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:43.999 "is_configured": true, 00:17:43.999 "data_offset": 2048, 00:17:43.999 "data_size": 63488 00:17:43.999 }, 00:17:43.999 { 00:17:43.999 "name": "BaseBdev3", 00:17:43.999 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:43.999 "is_configured": true, 00:17:43.999 "data_offset": 2048, 00:17:43.999 "data_size": 63488 00:17:43.999 } 00:17:43.999 ] 00:17:43.999 }' 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:43.999 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=609 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.999 "name": "raid_bdev1", 00:17:43.999 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:43.999 "strip_size_kb": 64, 00:17:43.999 "state": "online", 00:17:43.999 "raid_level": "raid5f", 00:17:43.999 "superblock": true, 00:17:43.999 "num_base_bdevs": 3, 00:17:43.999 "num_base_bdevs_discovered": 3, 00:17:43.999 "num_base_bdevs_operational": 3, 00:17:43.999 "process": { 00:17:43.999 "type": 
"rebuild", 00:17:43.999 "target": "spare", 00:17:43.999 "progress": { 00:17:43.999 "blocks": 22528, 00:17:43.999 "percent": 17 00:17:43.999 } 00:17:43.999 }, 00:17:43.999 "base_bdevs_list": [ 00:17:43.999 { 00:17:43.999 "name": "spare", 00:17:43.999 "uuid": "abbb22ca-6a32-5afb-8dcd-36f9966b5c47", 00:17:43.999 "is_configured": true, 00:17:43.999 "data_offset": 2048, 00:17:43.999 "data_size": 63488 00:17:43.999 }, 00:17:43.999 { 00:17:43.999 "name": "BaseBdev2", 00:17:43.999 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:43.999 "is_configured": true, 00:17:43.999 "data_offset": 2048, 00:17:43.999 "data_size": 63488 00:17:43.999 }, 00:17:43.999 { 00:17:43.999 "name": "BaseBdev3", 00:17:43.999 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:43.999 "is_configured": true, 00:17:43.999 "data_offset": 2048, 00:17:43.999 "data_size": 63488 00:17:43.999 } 00:17:43.999 ] 00:17:43.999 }' 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.999 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.258 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.258 14:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:45.194 14:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:45.194 14:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.194 14:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.194 14:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.194 14:28:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.194 14:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.194 14:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.194 14:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.194 14:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.194 14:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.194 14:28:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.194 14:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.194 "name": "raid_bdev1", 00:17:45.194 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:45.194 "strip_size_kb": 64, 00:17:45.194 "state": "online", 00:17:45.194 "raid_level": "raid5f", 00:17:45.194 "superblock": true, 00:17:45.194 "num_base_bdevs": 3, 00:17:45.194 "num_base_bdevs_discovered": 3, 00:17:45.194 "num_base_bdevs_operational": 3, 00:17:45.194 "process": { 00:17:45.194 "type": "rebuild", 00:17:45.194 "target": "spare", 00:17:45.194 "progress": { 00:17:45.194 "blocks": 45056, 00:17:45.194 "percent": 35 00:17:45.194 } 00:17:45.194 }, 00:17:45.194 "base_bdevs_list": [ 00:17:45.194 { 00:17:45.194 "name": "spare", 00:17:45.194 "uuid": "abbb22ca-6a32-5afb-8dcd-36f9966b5c47", 00:17:45.194 "is_configured": true, 00:17:45.194 "data_offset": 2048, 00:17:45.194 "data_size": 63488 00:17:45.194 }, 00:17:45.194 { 00:17:45.194 "name": "BaseBdev2", 00:17:45.194 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:45.194 "is_configured": true, 00:17:45.194 "data_offset": 2048, 00:17:45.194 "data_size": 63488 00:17:45.194 }, 00:17:45.194 { 00:17:45.194 "name": "BaseBdev3", 00:17:45.194 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:45.194 
"is_configured": true, 00:17:45.194 "data_offset": 2048, 00:17:45.194 "data_size": 63488 00:17:45.194 } 00:17:45.194 ] 00:17:45.194 }' 00:17:45.194 14:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.194 14:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.194 14:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.194 14:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.194 14:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:46.570 14:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:46.570 14:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.570 14:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.570 14:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.570 14:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.570 14:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.570 14:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.570 14:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.570 14:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.570 14:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.570 14:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.570 14:28:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.570 "name": "raid_bdev1", 00:17:46.570 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:46.570 "strip_size_kb": 64, 00:17:46.570 "state": "online", 00:17:46.570 "raid_level": "raid5f", 00:17:46.570 "superblock": true, 00:17:46.570 "num_base_bdevs": 3, 00:17:46.570 "num_base_bdevs_discovered": 3, 00:17:46.570 "num_base_bdevs_operational": 3, 00:17:46.570 "process": { 00:17:46.570 "type": "rebuild", 00:17:46.570 "target": "spare", 00:17:46.570 "progress": { 00:17:46.570 "blocks": 69632, 00:17:46.570 "percent": 54 00:17:46.570 } 00:17:46.570 }, 00:17:46.570 "base_bdevs_list": [ 00:17:46.570 { 00:17:46.570 "name": "spare", 00:17:46.570 "uuid": "abbb22ca-6a32-5afb-8dcd-36f9966b5c47", 00:17:46.570 "is_configured": true, 00:17:46.570 "data_offset": 2048, 00:17:46.570 "data_size": 63488 00:17:46.570 }, 00:17:46.570 { 00:17:46.570 "name": "BaseBdev2", 00:17:46.570 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:46.570 "is_configured": true, 00:17:46.570 "data_offset": 2048, 00:17:46.570 "data_size": 63488 00:17:46.570 }, 00:17:46.570 { 00:17:46.570 "name": "BaseBdev3", 00:17:46.570 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:46.570 "is_configured": true, 00:17:46.570 "data_offset": 2048, 00:17:46.570 "data_size": 63488 00:17:46.570 } 00:17:46.570 ] 00:17:46.570 }' 00:17:46.570 14:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.570 14:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.570 14:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.570 14:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.570 14:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:47.506 14:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:17:47.506 14:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.506 14:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.506 14:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.506 14:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.506 14:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.506 14:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.506 14:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.506 14:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.506 14:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.506 14:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.506 14:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.506 "name": "raid_bdev1", 00:17:47.506 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:47.506 "strip_size_kb": 64, 00:17:47.506 "state": "online", 00:17:47.506 "raid_level": "raid5f", 00:17:47.506 "superblock": true, 00:17:47.506 "num_base_bdevs": 3, 00:17:47.506 "num_base_bdevs_discovered": 3, 00:17:47.506 "num_base_bdevs_operational": 3, 00:17:47.506 "process": { 00:17:47.507 "type": "rebuild", 00:17:47.507 "target": "spare", 00:17:47.507 "progress": { 00:17:47.507 "blocks": 92160, 00:17:47.507 "percent": 72 00:17:47.507 } 00:17:47.507 }, 00:17:47.507 "base_bdevs_list": [ 00:17:47.507 { 00:17:47.507 "name": "spare", 00:17:47.507 "uuid": "abbb22ca-6a32-5afb-8dcd-36f9966b5c47", 00:17:47.507 "is_configured": true, 
00:17:47.507 "data_offset": 2048, 00:17:47.507 "data_size": 63488 00:17:47.507 }, 00:17:47.507 { 00:17:47.507 "name": "BaseBdev2", 00:17:47.507 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:47.507 "is_configured": true, 00:17:47.507 "data_offset": 2048, 00:17:47.507 "data_size": 63488 00:17:47.507 }, 00:17:47.507 { 00:17:47.507 "name": "BaseBdev3", 00:17:47.507 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:47.507 "is_configured": true, 00:17:47.507 "data_offset": 2048, 00:17:47.507 "data_size": 63488 00:17:47.507 } 00:17:47.507 ] 00:17:47.507 }' 00:17:47.507 14:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.507 14:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.507 14:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.507 14:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.507 14:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:48.883 14:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:48.883 14:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.883 14:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.883 14:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.883 14:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.883 14:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.883 14:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.883 14:28:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.883 14:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.883 14:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.883 14:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.883 14:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.883 "name": "raid_bdev1", 00:17:48.883 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:48.883 "strip_size_kb": 64, 00:17:48.883 "state": "online", 00:17:48.883 "raid_level": "raid5f", 00:17:48.883 "superblock": true, 00:17:48.883 "num_base_bdevs": 3, 00:17:48.883 "num_base_bdevs_discovered": 3, 00:17:48.883 "num_base_bdevs_operational": 3, 00:17:48.883 "process": { 00:17:48.883 "type": "rebuild", 00:17:48.883 "target": "spare", 00:17:48.883 "progress": { 00:17:48.883 "blocks": 114688, 00:17:48.883 "percent": 90 00:17:48.883 } 00:17:48.883 }, 00:17:48.883 "base_bdevs_list": [ 00:17:48.883 { 00:17:48.883 "name": "spare", 00:17:48.883 "uuid": "abbb22ca-6a32-5afb-8dcd-36f9966b5c47", 00:17:48.883 "is_configured": true, 00:17:48.883 "data_offset": 2048, 00:17:48.883 "data_size": 63488 00:17:48.883 }, 00:17:48.883 { 00:17:48.883 "name": "BaseBdev2", 00:17:48.883 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:48.883 "is_configured": true, 00:17:48.883 "data_offset": 2048, 00:17:48.883 "data_size": 63488 00:17:48.883 }, 00:17:48.883 { 00:17:48.883 "name": "BaseBdev3", 00:17:48.883 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:48.883 "is_configured": true, 00:17:48.883 "data_offset": 2048, 00:17:48.883 "data_size": 63488 00:17:48.883 } 00:17:48.883 ] 00:17:48.883 }' 00:17:48.883 14:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.883 14:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:17:48.883 14:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.883 14:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.883 14:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:49.139 [2024-11-20 14:28:27.977068] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:49.139 [2024-11-20 14:28:27.977216] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:49.139 [2024-11-20 14:28:27.977404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.706 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:49.706 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.706 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.706 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:49.706 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:49.706 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.706 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.706 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.706 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.706 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.706 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.706 14:28:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.706 "name": "raid_bdev1", 00:17:49.706 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:49.706 "strip_size_kb": 64, 00:17:49.706 "state": "online", 00:17:49.706 "raid_level": "raid5f", 00:17:49.706 "superblock": true, 00:17:49.706 "num_base_bdevs": 3, 00:17:49.706 "num_base_bdevs_discovered": 3, 00:17:49.706 "num_base_bdevs_operational": 3, 00:17:49.706 "base_bdevs_list": [ 00:17:49.706 { 00:17:49.706 "name": "spare", 00:17:49.706 "uuid": "abbb22ca-6a32-5afb-8dcd-36f9966b5c47", 00:17:49.706 "is_configured": true, 00:17:49.706 "data_offset": 2048, 00:17:49.706 "data_size": 63488 00:17:49.706 }, 00:17:49.706 { 00:17:49.706 "name": "BaseBdev2", 00:17:49.706 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:49.706 "is_configured": true, 00:17:49.706 "data_offset": 2048, 00:17:49.706 "data_size": 63488 00:17:49.706 }, 00:17:49.706 { 00:17:49.706 "name": "BaseBdev3", 00:17:49.706 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:49.706 "is_configured": true, 00:17:49.706 "data_offset": 2048, 00:17:49.706 "data_size": 63488 00:17:49.706 } 00:17:49.706 ] 00:17:49.706 }' 00:17:49.706 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.965 
14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.965 "name": "raid_bdev1", 00:17:49.965 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:49.965 "strip_size_kb": 64, 00:17:49.965 "state": "online", 00:17:49.965 "raid_level": "raid5f", 00:17:49.965 "superblock": true, 00:17:49.965 "num_base_bdevs": 3, 00:17:49.965 "num_base_bdevs_discovered": 3, 00:17:49.965 "num_base_bdevs_operational": 3, 00:17:49.965 "base_bdevs_list": [ 00:17:49.965 { 00:17:49.965 "name": "spare", 00:17:49.965 "uuid": "abbb22ca-6a32-5afb-8dcd-36f9966b5c47", 00:17:49.965 "is_configured": true, 00:17:49.965 "data_offset": 2048, 00:17:49.965 "data_size": 63488 00:17:49.965 }, 00:17:49.965 { 00:17:49.965 "name": "BaseBdev2", 00:17:49.965 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:49.965 "is_configured": true, 00:17:49.965 "data_offset": 2048, 00:17:49.965 "data_size": 63488 00:17:49.965 }, 00:17:49.965 { 00:17:49.965 "name": "BaseBdev3", 00:17:49.965 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:49.965 "is_configured": true, 00:17:49.965 "data_offset": 2048, 
00:17:49.965 "data_size": 63488 00:17:49.965 } 00:17:49.965 ] 00:17:49.965 }' 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.965 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.224 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.224 "name": "raid_bdev1", 00:17:50.224 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:50.224 "strip_size_kb": 64, 00:17:50.224 "state": "online", 00:17:50.224 "raid_level": "raid5f", 00:17:50.224 "superblock": true, 00:17:50.224 "num_base_bdevs": 3, 00:17:50.224 "num_base_bdevs_discovered": 3, 00:17:50.224 "num_base_bdevs_operational": 3, 00:17:50.224 "base_bdevs_list": [ 00:17:50.224 { 00:17:50.224 "name": "spare", 00:17:50.224 "uuid": "abbb22ca-6a32-5afb-8dcd-36f9966b5c47", 00:17:50.224 "is_configured": true, 00:17:50.224 "data_offset": 2048, 00:17:50.224 "data_size": 63488 00:17:50.224 }, 00:17:50.224 { 00:17:50.224 "name": "BaseBdev2", 00:17:50.224 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:50.224 "is_configured": true, 00:17:50.224 "data_offset": 2048, 00:17:50.224 "data_size": 63488 00:17:50.224 }, 00:17:50.224 { 00:17:50.224 "name": "BaseBdev3", 00:17:50.224 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:50.224 "is_configured": true, 00:17:50.224 "data_offset": 2048, 00:17:50.224 "data_size": 63488 00:17:50.224 } 00:17:50.224 ] 00:17:50.224 }' 00:17:50.224 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.224 14:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.483 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:50.483 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.483 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.483 [2024-11-20 14:28:29.432776] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.483 [2024-11-20 14:28:29.432941] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.483 [2024-11-20 14:28:29.433090] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.483 [2024-11-20 14:28:29.433197] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.483 [2024-11-20 14:28:29.433223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:50.483 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.483 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.483 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:50.483 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.483 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.483 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.742 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:50.742 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:50.742 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:50.742 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:50.742 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:50.742 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:50.742 14:28:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:50.742 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:50.742 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:50.742 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:50.742 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:50.742 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:50.742 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:51.000 /dev/nbd0 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:51.000 1+0 records in 00:17:51.000 1+0 records out 00:17:51.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551693 s, 7.4 MB/s 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:51.000 14:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:51.257 /dev/nbd1 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:51.257 
14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:51.257 1+0 records in 00:17:51.257 1+0 records out 00:17:51.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372897 s, 11.0 MB/s 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:51.257 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:51.514 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:51.514 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:51.514 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:51.514 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:17:51.514 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:51.514 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:51.514 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:51.772 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:51.772 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:51.772 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:51.772 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:51.772 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:51.772 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:51.772 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:51.772 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:51.772 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:51.772 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.077 [2024-11-20 14:28:30.849890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:52.077 [2024-11-20 14:28:30.849974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.077 [2024-11-20 14:28:30.850019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:52.077 [2024-11-20 14:28:30.850039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.077 [2024-11-20 14:28:30.852970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.077 [2024-11-20 14:28:30.853040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:52.077 [2024-11-20 14:28:30.853149] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:52.077 [2024-11-20 14:28:30.853218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:52.077 [2024-11-20 14:28:30.853401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:52.077 [2024-11-20 14:28:30.853570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:52.077 spare 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.077 [2024-11-20 14:28:30.953717] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:52.077 [2024-11-20 14:28:30.953795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:52.077 [2024-11-20 14:28:30.954244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:17:52.077 [2024-11-20 14:28:30.959143] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:52.077 [2024-11-20 14:28:30.959177] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:52.077 [2024-11-20 14:28:30.959468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.077 14:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.077 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.077 "name": "raid_bdev1", 00:17:52.077 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:52.077 "strip_size_kb": 64, 00:17:52.077 "state": "online", 00:17:52.077 "raid_level": "raid5f", 00:17:52.077 "superblock": true, 00:17:52.077 "num_base_bdevs": 3, 00:17:52.077 "num_base_bdevs_discovered": 3, 00:17:52.077 "num_base_bdevs_operational": 3, 00:17:52.077 "base_bdevs_list": [ 00:17:52.077 { 
00:17:52.077 "name": "spare", 00:17:52.077 "uuid": "abbb22ca-6a32-5afb-8dcd-36f9966b5c47", 00:17:52.077 "is_configured": true, 00:17:52.077 "data_offset": 2048, 00:17:52.077 "data_size": 63488 00:17:52.077 }, 00:17:52.077 { 00:17:52.077 "name": "BaseBdev2", 00:17:52.077 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:52.077 "is_configured": true, 00:17:52.077 "data_offset": 2048, 00:17:52.077 "data_size": 63488 00:17:52.077 }, 00:17:52.077 { 00:17:52.077 "name": "BaseBdev3", 00:17:52.077 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:52.077 "is_configured": true, 00:17:52.077 "data_offset": 2048, 00:17:52.077 "data_size": 63488 00:17:52.077 } 00:17:52.077 ] 00:17:52.077 }' 00:17:52.077 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.077 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.671 "name": "raid_bdev1", 00:17:52.671 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:52.671 "strip_size_kb": 64, 00:17:52.671 "state": "online", 00:17:52.671 "raid_level": "raid5f", 00:17:52.671 "superblock": true, 00:17:52.671 "num_base_bdevs": 3, 00:17:52.671 "num_base_bdevs_discovered": 3, 00:17:52.671 "num_base_bdevs_operational": 3, 00:17:52.671 "base_bdevs_list": [ 00:17:52.671 { 00:17:52.671 "name": "spare", 00:17:52.671 "uuid": "abbb22ca-6a32-5afb-8dcd-36f9966b5c47", 00:17:52.671 "is_configured": true, 00:17:52.671 "data_offset": 2048, 00:17:52.671 "data_size": 63488 00:17:52.671 }, 00:17:52.671 { 00:17:52.671 "name": "BaseBdev2", 00:17:52.671 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:52.671 "is_configured": true, 00:17:52.671 "data_offset": 2048, 00:17:52.671 "data_size": 63488 00:17:52.671 }, 00:17:52.671 { 00:17:52.671 "name": "BaseBdev3", 00:17:52.671 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:52.671 "is_configured": true, 00:17:52.671 "data_offset": 2048, 00:17:52.671 "data_size": 63488 00:17:52.671 } 00:17:52.671 ] 00:17:52.671 }' 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:52.671 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.934 [2024-11-20 14:28:31.661249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.934 14:28:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.934 "name": "raid_bdev1", 00:17:52.934 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:52.934 "strip_size_kb": 64, 00:17:52.934 "state": "online", 00:17:52.934 "raid_level": "raid5f", 00:17:52.934 "superblock": true, 00:17:52.934 "num_base_bdevs": 3, 00:17:52.934 "num_base_bdevs_discovered": 2, 00:17:52.934 "num_base_bdevs_operational": 2, 00:17:52.934 "base_bdevs_list": [ 00:17:52.934 { 00:17:52.934 "name": null, 00:17:52.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.934 "is_configured": false, 00:17:52.934 "data_offset": 0, 00:17:52.934 "data_size": 63488 00:17:52.934 }, 00:17:52.934 { 00:17:52.934 "name": "BaseBdev2", 00:17:52.934 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:52.934 "is_configured": true, 00:17:52.934 "data_offset": 2048, 00:17:52.934 "data_size": 63488 00:17:52.934 }, 00:17:52.934 { 00:17:52.934 "name": "BaseBdev3", 00:17:52.934 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:52.934 "is_configured": true, 00:17:52.934 "data_offset": 2048, 00:17:52.934 "data_size": 63488 00:17:52.934 } 00:17:52.934 ] 00:17:52.934 }' 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.934 14:28:31 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.205 14:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:53.205 14:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.205 14:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.205 [2024-11-20 14:28:32.101407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:53.205 [2024-11-20 14:28:32.101652] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:53.205 [2024-11-20 14:28:32.101691] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:53.205 [2024-11-20 14:28:32.101737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:53.205 [2024-11-20 14:28:32.115953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:17:53.205 14:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.205 14:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:53.205 [2024-11-20 14:28:32.123135] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.580 
14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.580 "name": "raid_bdev1", 00:17:54.580 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:54.580 "strip_size_kb": 64, 00:17:54.580 "state": "online", 00:17:54.580 "raid_level": "raid5f", 00:17:54.580 "superblock": true, 00:17:54.580 "num_base_bdevs": 3, 00:17:54.580 "num_base_bdevs_discovered": 3, 00:17:54.580 "num_base_bdevs_operational": 3, 00:17:54.580 "process": { 00:17:54.580 "type": "rebuild", 00:17:54.580 "target": "spare", 00:17:54.580 "progress": { 00:17:54.580 "blocks": 18432, 00:17:54.580 "percent": 14 00:17:54.580 } 00:17:54.580 }, 00:17:54.580 "base_bdevs_list": [ 00:17:54.580 { 00:17:54.580 "name": "spare", 00:17:54.580 "uuid": "abbb22ca-6a32-5afb-8dcd-36f9966b5c47", 00:17:54.580 "is_configured": true, 00:17:54.580 "data_offset": 2048, 00:17:54.580 "data_size": 63488 00:17:54.580 }, 00:17:54.580 { 00:17:54.580 "name": "BaseBdev2", 00:17:54.580 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:54.580 "is_configured": true, 00:17:54.580 "data_offset": 2048, 00:17:54.580 "data_size": 63488 00:17:54.580 }, 00:17:54.580 { 00:17:54.580 "name": "BaseBdev3", 00:17:54.580 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:54.580 "is_configured": true, 00:17:54.580 "data_offset": 2048, 00:17:54.580 "data_size": 63488 00:17:54.580 } 00:17:54.580 ] 00:17:54.580 }' 00:17:54.580 14:28:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.580 [2024-11-20 14:28:33.301389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.580 [2024-11-20 14:28:33.338903] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:54.580 [2024-11-20 14:28:33.339055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.580 [2024-11-20 14:28:33.339084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.580 [2024-11-20 14:28:33.339107] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:54.580 
14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.580 "name": "raid_bdev1", 00:17:54.580 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:54.580 "strip_size_kb": 64, 00:17:54.580 "state": "online", 00:17:54.580 "raid_level": "raid5f", 00:17:54.580 "superblock": true, 00:17:54.580 "num_base_bdevs": 3, 00:17:54.580 "num_base_bdevs_discovered": 2, 00:17:54.580 "num_base_bdevs_operational": 2, 00:17:54.580 "base_bdevs_list": [ 00:17:54.580 { 00:17:54.580 "name": null, 00:17:54.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.580 "is_configured": false, 00:17:54.580 "data_offset": 0, 00:17:54.580 "data_size": 63488 00:17:54.580 }, 00:17:54.580 { 00:17:54.580 "name": "BaseBdev2", 00:17:54.580 "uuid": 
"c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:54.580 "is_configured": true, 00:17:54.580 "data_offset": 2048, 00:17:54.580 "data_size": 63488 00:17:54.580 }, 00:17:54.580 { 00:17:54.580 "name": "BaseBdev3", 00:17:54.580 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:54.580 "is_configured": true, 00:17:54.580 "data_offset": 2048, 00:17:54.580 "data_size": 63488 00:17:54.580 } 00:17:54.580 ] 00:17:54.580 }' 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.580 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.146 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:55.146 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.146 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.146 [2024-11-20 14:28:33.875109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:55.146 [2024-11-20 14:28:33.875196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.146 [2024-11-20 14:28:33.875226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:17:55.146 [2024-11-20 14:28:33.875246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.146 [2024-11-20 14:28:33.875896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.146 [2024-11-20 14:28:33.875941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:55.146 [2024-11-20 14:28:33.876079] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:55.146 [2024-11-20 14:28:33.876108] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:17:55.146 [2024-11-20 14:28:33.876122] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:55.146 [2024-11-20 14:28:33.876156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:55.146 [2024-11-20 14:28:33.890983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:17:55.146 spare 00:17:55.146 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.146 14:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:55.146 [2024-11-20 14:28:33.898297] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:56.081 14:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.081 14:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.081 14:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.081 14:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.081 14:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.081 14:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.081 14:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.081 14:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.081 14:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.081 14:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.081 14:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.081 "name": 
"raid_bdev1", 00:17:56.081 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:56.081 "strip_size_kb": 64, 00:17:56.081 "state": "online", 00:17:56.081 "raid_level": "raid5f", 00:17:56.081 "superblock": true, 00:17:56.081 "num_base_bdevs": 3, 00:17:56.081 "num_base_bdevs_discovered": 3, 00:17:56.081 "num_base_bdevs_operational": 3, 00:17:56.081 "process": { 00:17:56.081 "type": "rebuild", 00:17:56.081 "target": "spare", 00:17:56.081 "progress": { 00:17:56.081 "blocks": 18432, 00:17:56.081 "percent": 14 00:17:56.081 } 00:17:56.081 }, 00:17:56.081 "base_bdevs_list": [ 00:17:56.081 { 00:17:56.081 "name": "spare", 00:17:56.081 "uuid": "abbb22ca-6a32-5afb-8dcd-36f9966b5c47", 00:17:56.081 "is_configured": true, 00:17:56.081 "data_offset": 2048, 00:17:56.081 "data_size": 63488 00:17:56.081 }, 00:17:56.081 { 00:17:56.081 "name": "BaseBdev2", 00:17:56.081 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:56.081 "is_configured": true, 00:17:56.081 "data_offset": 2048, 00:17:56.081 "data_size": 63488 00:17:56.081 }, 00:17:56.081 { 00:17:56.081 "name": "BaseBdev3", 00:17:56.081 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:56.081 "is_configured": true, 00:17:56.081 "data_offset": 2048, 00:17:56.081 "data_size": 63488 00:17:56.081 } 00:17:56.081 ] 00:17:56.081 }' 00:17:56.081 14:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.081 14:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.081 14:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.081 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.081 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:56.081 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.081 14:28:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.081 [2024-11-20 14:28:35.044279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.340 [2024-11-20 14:28:35.112550] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:56.340 [2024-11-20 14:28:35.112707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.340 [2024-11-20 14:28:35.112737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.340 [2024-11-20 14:28:35.112749] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.340 "name": "raid_bdev1", 00:17:56.340 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:56.340 "strip_size_kb": 64, 00:17:56.340 "state": "online", 00:17:56.340 "raid_level": "raid5f", 00:17:56.340 "superblock": true, 00:17:56.340 "num_base_bdevs": 3, 00:17:56.340 "num_base_bdevs_discovered": 2, 00:17:56.340 "num_base_bdevs_operational": 2, 00:17:56.340 "base_bdevs_list": [ 00:17:56.340 { 00:17:56.340 "name": null, 00:17:56.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.340 "is_configured": false, 00:17:56.340 "data_offset": 0, 00:17:56.340 "data_size": 63488 00:17:56.340 }, 00:17:56.340 { 00:17:56.340 "name": "BaseBdev2", 00:17:56.340 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:56.340 "is_configured": true, 00:17:56.340 "data_offset": 2048, 00:17:56.340 "data_size": 63488 00:17:56.340 }, 00:17:56.340 { 00:17:56.340 "name": "BaseBdev3", 00:17:56.340 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:56.340 "is_configured": true, 00:17:56.340 "data_offset": 2048, 00:17:56.340 "data_size": 63488 00:17:56.340 } 00:17:56.340 ] 00:17:56.340 }' 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.340 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.908 "name": "raid_bdev1", 00:17:56.908 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:56.908 "strip_size_kb": 64, 00:17:56.908 "state": "online", 00:17:56.908 "raid_level": "raid5f", 00:17:56.908 "superblock": true, 00:17:56.908 "num_base_bdevs": 3, 00:17:56.908 "num_base_bdevs_discovered": 2, 00:17:56.908 "num_base_bdevs_operational": 2, 00:17:56.908 "base_bdevs_list": [ 00:17:56.908 { 00:17:56.908 "name": null, 00:17:56.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.908 "is_configured": false, 00:17:56.908 "data_offset": 0, 00:17:56.908 "data_size": 63488 00:17:56.908 }, 00:17:56.908 { 00:17:56.908 "name": "BaseBdev2", 00:17:56.908 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:56.908 "is_configured": true, 00:17:56.908 "data_offset": 2048, 00:17:56.908 "data_size": 63488 00:17:56.908 }, 00:17:56.908 { 
00:17:56.908 "name": "BaseBdev3", 00:17:56.908 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:56.908 "is_configured": true, 00:17:56.908 "data_offset": 2048, 00:17:56.908 "data_size": 63488 00:17:56.908 } 00:17:56.908 ] 00:17:56.908 }' 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.908 [2024-11-20 14:28:35.824017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:56.908 [2024-11-20 14:28:35.824113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.908 [2024-11-20 14:28:35.824150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:56.908 [2024-11-20 14:28:35.824165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.908 
[2024-11-20 14:28:35.824741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.908 [2024-11-20 14:28:35.824778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:56.908 [2024-11-20 14:28:35.824884] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:56.908 [2024-11-20 14:28:35.824906] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:56.908 [2024-11-20 14:28:35.824932] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:56.908 [2024-11-20 14:28:35.824945] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:56.908 BaseBdev1 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.908 14:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:58.284 14:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:58.284 14:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.284 14:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.284 14:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.284 14:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.284 14:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.284 14:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.284 14:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.285 14:28:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.285 14:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.285 14:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.285 14:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.285 14:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.285 14:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.285 14:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.285 14:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.285 "name": "raid_bdev1", 00:17:58.285 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:58.285 "strip_size_kb": 64, 00:17:58.285 "state": "online", 00:17:58.285 "raid_level": "raid5f", 00:17:58.285 "superblock": true, 00:17:58.285 "num_base_bdevs": 3, 00:17:58.285 "num_base_bdevs_discovered": 2, 00:17:58.285 "num_base_bdevs_operational": 2, 00:17:58.285 "base_bdevs_list": [ 00:17:58.285 { 00:17:58.285 "name": null, 00:17:58.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.285 "is_configured": false, 00:17:58.285 "data_offset": 0, 00:17:58.285 "data_size": 63488 00:17:58.285 }, 00:17:58.285 { 00:17:58.285 "name": "BaseBdev2", 00:17:58.285 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:58.285 "is_configured": true, 00:17:58.285 "data_offset": 2048, 00:17:58.285 "data_size": 63488 00:17:58.285 }, 00:17:58.285 { 00:17:58.285 "name": "BaseBdev3", 00:17:58.285 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:58.285 "is_configured": true, 00:17:58.285 "data_offset": 2048, 00:17:58.285 "data_size": 63488 00:17:58.285 } 00:17:58.285 ] 00:17:58.285 }' 00:17:58.285 14:28:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.285 14:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.544 "name": "raid_bdev1", 00:17:58.544 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:58.544 "strip_size_kb": 64, 00:17:58.544 "state": "online", 00:17:58.544 "raid_level": "raid5f", 00:17:58.544 "superblock": true, 00:17:58.544 "num_base_bdevs": 3, 00:17:58.544 "num_base_bdevs_discovered": 2, 00:17:58.544 "num_base_bdevs_operational": 2, 00:17:58.544 "base_bdevs_list": [ 00:17:58.544 { 00:17:58.544 "name": null, 00:17:58.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.544 "is_configured": false, 00:17:58.544 "data_offset": 0, 00:17:58.544 "data_size": 63488 
00:17:58.544 }, 00:17:58.544 { 00:17:58.544 "name": "BaseBdev2", 00:17:58.544 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:58.544 "is_configured": true, 00:17:58.544 "data_offset": 2048, 00:17:58.544 "data_size": 63488 00:17:58.544 }, 00:17:58.544 { 00:17:58.544 "name": "BaseBdev3", 00:17:58.544 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:58.544 "is_configured": true, 00:17:58.544 "data_offset": 2048, 00:17:58.544 "data_size": 63488 00:17:58.544 } 00:17:58.544 ] 00:17:58.544 }' 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:58.544 14:28:37 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.544 [2024-11-20 14:28:37.512759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:58.544 [2024-11-20 14:28:37.512972] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:58.544 [2024-11-20 14:28:37.513022] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:58.544 request: 00:17:58.544 { 00:17:58.544 "base_bdev": "BaseBdev1", 00:17:58.544 "raid_bdev": "raid_bdev1", 00:17:58.544 "method": "bdev_raid_add_base_bdev", 00:17:58.544 "req_id": 1 00:17:58.544 } 00:17:58.544 Got JSON-RPC error response 00:17:58.544 response: 00:17:58.544 { 00:17:58.544 "code": -22, 00:17:58.544 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:58.544 } 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.544 14:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.923 "name": "raid_bdev1", 00:17:59.923 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:17:59.923 "strip_size_kb": 64, 00:17:59.923 "state": "online", 00:17:59.923 "raid_level": "raid5f", 00:17:59.923 "superblock": true, 00:17:59.923 "num_base_bdevs": 3, 00:17:59.923 "num_base_bdevs_discovered": 2, 00:17:59.923 "num_base_bdevs_operational": 2, 00:17:59.923 "base_bdevs_list": [ 00:17:59.923 { 00:17:59.923 "name": null, 00:17:59.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.923 "is_configured": false, 00:17:59.923 
"data_offset": 0, 00:17:59.923 "data_size": 63488 00:17:59.923 }, 00:17:59.923 { 00:17:59.923 "name": "BaseBdev2", 00:17:59.923 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:17:59.923 "is_configured": true, 00:17:59.923 "data_offset": 2048, 00:17:59.923 "data_size": 63488 00:17:59.923 }, 00:17:59.923 { 00:17:59.923 "name": "BaseBdev3", 00:17:59.923 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:17:59.923 "is_configured": true, 00:17:59.923 "data_offset": 2048, 00:17:59.923 "data_size": 63488 00:17:59.923 } 00:17:59.923 ] 00:17:59.923 }' 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.923 14:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.182 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:00.182 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.182 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:00.182 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:00.182 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.182 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.182 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.182 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.182 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.182 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.182 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.182 "name": 
"raid_bdev1", 00:18:00.182 "uuid": "eddb02b4-5e76-4bce-b08e-f9b2a7b43b06", 00:18:00.182 "strip_size_kb": 64, 00:18:00.182 "state": "online", 00:18:00.182 "raid_level": "raid5f", 00:18:00.182 "superblock": true, 00:18:00.182 "num_base_bdevs": 3, 00:18:00.182 "num_base_bdevs_discovered": 2, 00:18:00.182 "num_base_bdevs_operational": 2, 00:18:00.182 "base_bdevs_list": [ 00:18:00.182 { 00:18:00.182 "name": null, 00:18:00.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.182 "is_configured": false, 00:18:00.182 "data_offset": 0, 00:18:00.182 "data_size": 63488 00:18:00.182 }, 00:18:00.182 { 00:18:00.182 "name": "BaseBdev2", 00:18:00.182 "uuid": "c2368dc2-f0ff-5019-b97b-19578e06c9b2", 00:18:00.182 "is_configured": true, 00:18:00.182 "data_offset": 2048, 00:18:00.182 "data_size": 63488 00:18:00.182 }, 00:18:00.182 { 00:18:00.182 "name": "BaseBdev3", 00:18:00.182 "uuid": "3c4359f3-0de9-5445-8499-eddb4065e2fd", 00:18:00.182 "is_configured": true, 00:18:00.182 "data_offset": 2048, 00:18:00.182 "data_size": 63488 00:18:00.182 } 00:18:00.182 ] 00:18:00.182 }' 00:18:00.182 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.182 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:00.182 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.440 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:00.440 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82436 00:18:00.440 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82436 ']' 00:18:00.440 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82436 00:18:00.440 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:00.440 14:28:39 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.440 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82436 00:18:00.440 killing process with pid 82436 00:18:00.440 Received shutdown signal, test time was about 60.000000 seconds 00:18:00.440 00:18:00.440 Latency(us) 00:18:00.440 [2024-11-20T14:28:39.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.440 [2024-11-20T14:28:39.422Z] =================================================================================================================== 00:18:00.440 [2024-11-20T14:28:39.422Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:00.440 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.440 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.440 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82436' 00:18:00.440 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82436 00:18:00.440 [2024-11-20 14:28:39.242084] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:00.440 14:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82436 00:18:00.440 [2024-11-20 14:28:39.242250] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.440 [2024-11-20 14:28:39.242335] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.440 [2024-11-20 14:28:39.242356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:00.698 [2024-11-20 14:28:39.615469] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:02.089 14:28:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:02.089 00:18:02.089 real 0m24.516s 00:18:02.089 user 0m32.432s 00:18:02.089 sys 0m2.474s 00:18:02.089 14:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.089 14:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.089 ************************************ 00:18:02.089 END TEST raid5f_rebuild_test_sb 00:18:02.089 ************************************ 00:18:02.089 14:28:40 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:02.089 14:28:40 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:18:02.089 14:28:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:02.089 14:28:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.089 14:28:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:02.089 ************************************ 00:18:02.089 START TEST raid5f_state_function_test 00:18:02.089 ************************************ 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83195 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:02.089 Process raid pid: 83195 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83195' 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83195 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83195 ']' 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.089 14:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.089 [2024-11-20 14:28:40.846534] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:18:02.089 [2024-11-20 14:28:40.846706] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.089 [2024-11-20 14:28:41.027934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.347 [2024-11-20 14:28:41.187187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.605 [2024-11-20 14:28:41.418055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.605 [2024-11-20 14:28:41.418116] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.862 [2024-11-20 14:28:41.798145] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:02.862 [2024-11-20 14:28:41.798221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:02.862 [2024-11-20 
14:28:41.798238] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:02.862 [2024-11-20 14:28:41.798256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:02.862 [2024-11-20 14:28:41.798267] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:02.862 [2024-11-20 14:28:41.798281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:02.862 [2024-11-20 14:28:41.798291] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:02.862 [2024-11-20 14:28:41.798305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.862 14:28:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.862 14:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.119 14:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.119 "name": "Existed_Raid", 00:18:03.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.119 "strip_size_kb": 64, 00:18:03.119 "state": "configuring", 00:18:03.119 "raid_level": "raid5f", 00:18:03.119 "superblock": false, 00:18:03.119 "num_base_bdevs": 4, 00:18:03.119 "num_base_bdevs_discovered": 0, 00:18:03.119 "num_base_bdevs_operational": 4, 00:18:03.119 "base_bdevs_list": [ 00:18:03.119 { 00:18:03.119 "name": "BaseBdev1", 00:18:03.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.119 "is_configured": false, 00:18:03.119 "data_offset": 0, 00:18:03.119 "data_size": 0 00:18:03.119 }, 00:18:03.119 { 00:18:03.119 "name": "BaseBdev2", 00:18:03.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.119 "is_configured": false, 00:18:03.119 "data_offset": 0, 00:18:03.119 "data_size": 0 00:18:03.119 }, 00:18:03.119 { 00:18:03.119 "name": "BaseBdev3", 00:18:03.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.119 "is_configured": false, 00:18:03.119 "data_offset": 0, 00:18:03.119 "data_size": 0 00:18:03.119 }, 00:18:03.119 { 00:18:03.119 "name": "BaseBdev4", 00:18:03.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.119 "is_configured": false, 00:18:03.119 
"data_offset": 0, 00:18:03.119 "data_size": 0 00:18:03.119 } 00:18:03.119 ] 00:18:03.119 }' 00:18:03.119 14:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.119 14:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.376 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:03.376 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.376 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.376 [2024-11-20 14:28:42.322224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:03.376 [2024-11-20 14:28:42.322279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:03.376 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.376 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:03.376 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.376 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.376 [2024-11-20 14:28:42.330190] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:03.376 [2024-11-20 14:28:42.330243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:03.376 [2024-11-20 14:28:42.330259] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:03.376 [2024-11-20 14:28:42.330276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:03.376 [2024-11-20 
14:28:42.330286] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:03.376 [2024-11-20 14:28:42.330301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:03.376 [2024-11-20 14:28:42.330311] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:03.376 [2024-11-20 14:28:42.330325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:03.376 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.376 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:03.376 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.376 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.632 [2024-11-20 14:28:42.375091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:03.632 BaseBdev1 00:18:03.632 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.632 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:03.632 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:03.632 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:03.632 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:03.632 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:03.632 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:03.632 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:18:03.632 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.632 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.632 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.632 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:03.632 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.632 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.632 [ 00:18:03.632 { 00:18:03.632 "name": "BaseBdev1", 00:18:03.632 "aliases": [ 00:18:03.632 "bd9d0c20-edd6-47cf-8389-66c7755628f9" 00:18:03.632 ], 00:18:03.632 "product_name": "Malloc disk", 00:18:03.632 "block_size": 512, 00:18:03.632 "num_blocks": 65536, 00:18:03.632 "uuid": "bd9d0c20-edd6-47cf-8389-66c7755628f9", 00:18:03.632 "assigned_rate_limits": { 00:18:03.632 "rw_ios_per_sec": 0, 00:18:03.632 "rw_mbytes_per_sec": 0, 00:18:03.632 "r_mbytes_per_sec": 0, 00:18:03.632 "w_mbytes_per_sec": 0 00:18:03.632 }, 00:18:03.632 "claimed": true, 00:18:03.632 "claim_type": "exclusive_write", 00:18:03.633 "zoned": false, 00:18:03.633 "supported_io_types": { 00:18:03.633 "read": true, 00:18:03.633 "write": true, 00:18:03.633 "unmap": true, 00:18:03.633 "flush": true, 00:18:03.633 "reset": true, 00:18:03.633 "nvme_admin": false, 00:18:03.633 "nvme_io": false, 00:18:03.633 "nvme_io_md": false, 00:18:03.633 "write_zeroes": true, 00:18:03.633 "zcopy": true, 00:18:03.633 "get_zone_info": false, 00:18:03.633 "zone_management": false, 00:18:03.633 "zone_append": false, 00:18:03.633 "compare": false, 00:18:03.633 "compare_and_write": false, 00:18:03.633 "abort": true, 00:18:03.633 "seek_hole": false, 00:18:03.633 "seek_data": false, 00:18:03.633 "copy": true, 00:18:03.633 
"nvme_iov_md": false 00:18:03.633 }, 00:18:03.633 "memory_domains": [ 00:18:03.633 { 00:18:03.633 "dma_device_id": "system", 00:18:03.633 "dma_device_type": 1 00:18:03.633 }, 00:18:03.633 { 00:18:03.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.633 "dma_device_type": 2 00:18:03.633 } 00:18:03.633 ], 00:18:03.633 "driver_specific": {} 00:18:03.633 } 00:18:03.633 ] 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.633 "name": "Existed_Raid", 00:18:03.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.633 "strip_size_kb": 64, 00:18:03.633 "state": "configuring", 00:18:03.633 "raid_level": "raid5f", 00:18:03.633 "superblock": false, 00:18:03.633 "num_base_bdevs": 4, 00:18:03.633 "num_base_bdevs_discovered": 1, 00:18:03.633 "num_base_bdevs_operational": 4, 00:18:03.633 "base_bdevs_list": [ 00:18:03.633 { 00:18:03.633 "name": "BaseBdev1", 00:18:03.633 "uuid": "bd9d0c20-edd6-47cf-8389-66c7755628f9", 00:18:03.633 "is_configured": true, 00:18:03.633 "data_offset": 0, 00:18:03.633 "data_size": 65536 00:18:03.633 }, 00:18:03.633 { 00:18:03.633 "name": "BaseBdev2", 00:18:03.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.633 "is_configured": false, 00:18:03.633 "data_offset": 0, 00:18:03.633 "data_size": 0 00:18:03.633 }, 00:18:03.633 { 00:18:03.633 "name": "BaseBdev3", 00:18:03.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.633 "is_configured": false, 00:18:03.633 "data_offset": 0, 00:18:03.633 "data_size": 0 00:18:03.633 }, 00:18:03.633 { 00:18:03.633 "name": "BaseBdev4", 00:18:03.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.633 "is_configured": false, 00:18:03.633 "data_offset": 0, 00:18:03.633 "data_size": 0 00:18:03.633 } 00:18:03.633 ] 00:18:03.633 }' 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.633 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
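Each `verify_raid_bdev_state` step in the log above fetches the array returned by `rpc_cmd bdev_raid_get_bdevs all` and filters it with `jq -r '.[] | select(.name == "Existed_Raid")'` before comparing fields. A minimal Python sketch of that select-and-check step follows; the sample record mirrors the "Existed_Raid" dump printed in the log (values copied from the dump, not queried from a live SPDK target), and the specific assertions are an approximation of what the shell helper verifies:

```python
import json

# Sample bdev_raid_get_bdevs output, mirroring the "Existed_Raid" dump above
# (one base bdev configured, raid5f, strip size 64 KiB, state "configuring").
raid_bdevs_json = json.dumps([
    {
        "name": "Existed_Raid",
        "strip_size_kb": 64,
        "state": "configuring",
        "raid_level": "raid5f",
        "superblock": False,
        "num_base_bdevs": 4,
        "num_base_bdevs_discovered": 1,
        "num_base_bdevs_operational": 4,
        "base_bdevs_list": [
            {"name": "BaseBdev1", "is_configured": True, "data_size": 65536},
            {"name": "BaseBdev2", "is_configured": False, "data_size": 0},
            {"name": "BaseBdev3", "is_configured": False, "data_size": 0},
            {"name": "BaseBdev4", "is_configured": False, "data_size": 0},
        ],
    }
])

def select_raid_bdev(raw, name):
    # Python equivalent of: jq -r '.[] | select(.name == NAME)'
    return next((b for b in json.loads(raw) if b["name"] == name), None)

info = select_raid_bdev(raid_bdevs_json, "Existed_Raid")
assert info is not None

# Field checks of the kind verify_raid_bdev_state performs on the record:
assert info["state"] == "configuring"
assert info["raid_level"] == "raid5f"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 4

# The discovered count should match the number of configured base bdevs.
discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
assert discovered == info["num_base_bdevs_discovered"]
```

In the test itself the same filtering happens in-process via `jq`, against the live RPC socket (`/var/tmp/spdk.sock`); the sketch only reproduces the selection and comparison logic on static data.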
00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.199 [2024-11-20 14:28:42.959316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:04.199 [2024-11-20 14:28:42.959395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.199 [2024-11-20 14:28:42.967379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:04.199 [2024-11-20 14:28:42.969807] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:04.199 [2024-11-20 14:28:42.969862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:04.199 [2024-11-20 14:28:42.969879] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:04.199 [2024-11-20 14:28:42.969897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:04.199 [2024-11-20 14:28:42.969908] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:04.199 [2024-11-20 14:28:42.969922] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.199 14:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.199 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.199 "name": "Existed_Raid", 00:18:04.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.199 "strip_size_kb": 64, 00:18:04.199 "state": "configuring", 00:18:04.199 "raid_level": "raid5f", 00:18:04.199 "superblock": false, 00:18:04.199 "num_base_bdevs": 4, 00:18:04.199 "num_base_bdevs_discovered": 1, 00:18:04.199 "num_base_bdevs_operational": 4, 00:18:04.199 "base_bdevs_list": [ 00:18:04.199 { 00:18:04.199 "name": "BaseBdev1", 00:18:04.199 "uuid": "bd9d0c20-edd6-47cf-8389-66c7755628f9", 00:18:04.199 "is_configured": true, 00:18:04.199 "data_offset": 0, 00:18:04.199 "data_size": 65536 00:18:04.199 }, 00:18:04.199 { 00:18:04.199 "name": "BaseBdev2", 00:18:04.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.199 "is_configured": false, 00:18:04.199 "data_offset": 0, 00:18:04.199 "data_size": 0 00:18:04.199 }, 00:18:04.199 { 00:18:04.199 "name": "BaseBdev3", 00:18:04.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.199 "is_configured": false, 00:18:04.199 "data_offset": 0, 00:18:04.199 "data_size": 0 00:18:04.199 }, 00:18:04.199 { 00:18:04.199 "name": "BaseBdev4", 00:18:04.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.199 "is_configured": false, 00:18:04.199 "data_offset": 0, 00:18:04.199 "data_size": 0 00:18:04.199 } 00:18:04.199 ] 00:18:04.199 }' 00:18:04.199 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.199 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.766 [2024-11-20 14:28:43.521822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:04.766 BaseBdev2 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.766 [ 00:18:04.766 { 00:18:04.766 "name": "BaseBdev2", 00:18:04.766 "aliases": [ 
00:18:04.766 "3d24aa44-6cec-494a-80d2-7e1c2df03be3" 00:18:04.766 ], 00:18:04.766 "product_name": "Malloc disk", 00:18:04.766 "block_size": 512, 00:18:04.766 "num_blocks": 65536, 00:18:04.766 "uuid": "3d24aa44-6cec-494a-80d2-7e1c2df03be3", 00:18:04.766 "assigned_rate_limits": { 00:18:04.766 "rw_ios_per_sec": 0, 00:18:04.766 "rw_mbytes_per_sec": 0, 00:18:04.766 "r_mbytes_per_sec": 0, 00:18:04.766 "w_mbytes_per_sec": 0 00:18:04.766 }, 00:18:04.766 "claimed": true, 00:18:04.766 "claim_type": "exclusive_write", 00:18:04.766 "zoned": false, 00:18:04.766 "supported_io_types": { 00:18:04.766 "read": true, 00:18:04.766 "write": true, 00:18:04.766 "unmap": true, 00:18:04.766 "flush": true, 00:18:04.766 "reset": true, 00:18:04.766 "nvme_admin": false, 00:18:04.766 "nvme_io": false, 00:18:04.766 "nvme_io_md": false, 00:18:04.766 "write_zeroes": true, 00:18:04.766 "zcopy": true, 00:18:04.766 "get_zone_info": false, 00:18:04.766 "zone_management": false, 00:18:04.766 "zone_append": false, 00:18:04.766 "compare": false, 00:18:04.766 "compare_and_write": false, 00:18:04.766 "abort": true, 00:18:04.766 "seek_hole": false, 00:18:04.766 "seek_data": false, 00:18:04.766 "copy": true, 00:18:04.766 "nvme_iov_md": false 00:18:04.766 }, 00:18:04.766 "memory_domains": [ 00:18:04.766 { 00:18:04.766 "dma_device_id": "system", 00:18:04.766 "dma_device_type": 1 00:18:04.766 }, 00:18:04.766 { 00:18:04.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.766 "dma_device_type": 2 00:18:04.766 } 00:18:04.766 ], 00:18:04.766 "driver_specific": {} 00:18:04.766 } 00:18:04.766 ] 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.766 "name": "Existed_Raid", 00:18:04.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.766 "strip_size_kb": 64, 
00:18:04.766 "state": "configuring", 00:18:04.766 "raid_level": "raid5f", 00:18:04.766 "superblock": false, 00:18:04.766 "num_base_bdevs": 4, 00:18:04.766 "num_base_bdevs_discovered": 2, 00:18:04.766 "num_base_bdevs_operational": 4, 00:18:04.766 "base_bdevs_list": [ 00:18:04.766 { 00:18:04.766 "name": "BaseBdev1", 00:18:04.766 "uuid": "bd9d0c20-edd6-47cf-8389-66c7755628f9", 00:18:04.766 "is_configured": true, 00:18:04.766 "data_offset": 0, 00:18:04.766 "data_size": 65536 00:18:04.766 }, 00:18:04.766 { 00:18:04.766 "name": "BaseBdev2", 00:18:04.766 "uuid": "3d24aa44-6cec-494a-80d2-7e1c2df03be3", 00:18:04.766 "is_configured": true, 00:18:04.766 "data_offset": 0, 00:18:04.766 "data_size": 65536 00:18:04.766 }, 00:18:04.766 { 00:18:04.766 "name": "BaseBdev3", 00:18:04.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.766 "is_configured": false, 00:18:04.766 "data_offset": 0, 00:18:04.766 "data_size": 0 00:18:04.766 }, 00:18:04.766 { 00:18:04.766 "name": "BaseBdev4", 00:18:04.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.766 "is_configured": false, 00:18:04.766 "data_offset": 0, 00:18:04.766 "data_size": 0 00:18:04.766 } 00:18:04.766 ] 00:18:04.766 }' 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.766 14:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.333 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:05.333 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.333 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.333 [2024-11-20 14:28:44.145092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:05.333 BaseBdev3 00:18:05.333 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:05.333 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:05.333 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:05.333 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:05.333 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:05.333 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:05.333 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.334 [ 00:18:05.334 { 00:18:05.334 "name": "BaseBdev3", 00:18:05.334 "aliases": [ 00:18:05.334 "70049c5c-782c-459a-a420-6324cff4fd5a" 00:18:05.334 ], 00:18:05.334 "product_name": "Malloc disk", 00:18:05.334 "block_size": 512, 00:18:05.334 "num_blocks": 65536, 00:18:05.334 "uuid": "70049c5c-782c-459a-a420-6324cff4fd5a", 00:18:05.334 "assigned_rate_limits": { 00:18:05.334 "rw_ios_per_sec": 0, 00:18:05.334 "rw_mbytes_per_sec": 0, 00:18:05.334 "r_mbytes_per_sec": 0, 00:18:05.334 
"w_mbytes_per_sec": 0 00:18:05.334 }, 00:18:05.334 "claimed": true, 00:18:05.334 "claim_type": "exclusive_write", 00:18:05.334 "zoned": false, 00:18:05.334 "supported_io_types": { 00:18:05.334 "read": true, 00:18:05.334 "write": true, 00:18:05.334 "unmap": true, 00:18:05.334 "flush": true, 00:18:05.334 "reset": true, 00:18:05.334 "nvme_admin": false, 00:18:05.334 "nvme_io": false, 00:18:05.334 "nvme_io_md": false, 00:18:05.334 "write_zeroes": true, 00:18:05.334 "zcopy": true, 00:18:05.334 "get_zone_info": false, 00:18:05.334 "zone_management": false, 00:18:05.334 "zone_append": false, 00:18:05.334 "compare": false, 00:18:05.334 "compare_and_write": false, 00:18:05.334 "abort": true, 00:18:05.334 "seek_hole": false, 00:18:05.334 "seek_data": false, 00:18:05.334 "copy": true, 00:18:05.334 "nvme_iov_md": false 00:18:05.334 }, 00:18:05.334 "memory_domains": [ 00:18:05.334 { 00:18:05.334 "dma_device_id": "system", 00:18:05.334 "dma_device_type": 1 00:18:05.334 }, 00:18:05.334 { 00:18:05.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.334 "dma_device_type": 2 00:18:05.334 } 00:18:05.334 ], 00:18:05.334 "driver_specific": {} 00:18:05.334 } 00:18:05.334 ] 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 
00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.334 "name": "Existed_Raid", 00:18:05.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.334 "strip_size_kb": 64, 00:18:05.334 "state": "configuring", 00:18:05.334 "raid_level": "raid5f", 00:18:05.334 "superblock": false, 00:18:05.334 "num_base_bdevs": 4, 00:18:05.334 "num_base_bdevs_discovered": 3, 00:18:05.334 "num_base_bdevs_operational": 4, 00:18:05.334 "base_bdevs_list": [ 00:18:05.334 { 00:18:05.334 "name": "BaseBdev1", 00:18:05.334 "uuid": "bd9d0c20-edd6-47cf-8389-66c7755628f9", 00:18:05.334 
"is_configured": true, 00:18:05.334 "data_offset": 0, 00:18:05.334 "data_size": 65536 00:18:05.334 }, 00:18:05.334 { 00:18:05.334 "name": "BaseBdev2", 00:18:05.334 "uuid": "3d24aa44-6cec-494a-80d2-7e1c2df03be3", 00:18:05.334 "is_configured": true, 00:18:05.334 "data_offset": 0, 00:18:05.334 "data_size": 65536 00:18:05.334 }, 00:18:05.334 { 00:18:05.334 "name": "BaseBdev3", 00:18:05.334 "uuid": "70049c5c-782c-459a-a420-6324cff4fd5a", 00:18:05.334 "is_configured": true, 00:18:05.334 "data_offset": 0, 00:18:05.334 "data_size": 65536 00:18:05.334 }, 00:18:05.334 { 00:18:05.334 "name": "BaseBdev4", 00:18:05.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.334 "is_configured": false, 00:18:05.334 "data_offset": 0, 00:18:05.334 "data_size": 0 00:18:05.334 } 00:18:05.334 ] 00:18:05.334 }' 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.334 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.903 [2024-11-20 14:28:44.711718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:05.903 [2024-11-20 14:28:44.711814] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:05.903 [2024-11-20 14:28:44.711831] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:05.903 [2024-11-20 14:28:44.712192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:05.903 [2024-11-20 14:28:44.719076] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:18:05.903 [2024-11-20 14:28:44.719133] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:05.903 [2024-11-20 14:28:44.719492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.903 BaseBdev4 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.903 [ 00:18:05.903 { 00:18:05.903 "name": "BaseBdev4", 00:18:05.903 "aliases": [ 00:18:05.903 
"927786b3-05c4-44c8-89c6-4792b50f7af5" 00:18:05.903 ], 00:18:05.903 "product_name": "Malloc disk", 00:18:05.903 "block_size": 512, 00:18:05.903 "num_blocks": 65536, 00:18:05.903 "uuid": "927786b3-05c4-44c8-89c6-4792b50f7af5", 00:18:05.903 "assigned_rate_limits": { 00:18:05.903 "rw_ios_per_sec": 0, 00:18:05.903 "rw_mbytes_per_sec": 0, 00:18:05.903 "r_mbytes_per_sec": 0, 00:18:05.903 "w_mbytes_per_sec": 0 00:18:05.903 }, 00:18:05.903 "claimed": true, 00:18:05.903 "claim_type": "exclusive_write", 00:18:05.903 "zoned": false, 00:18:05.903 "supported_io_types": { 00:18:05.903 "read": true, 00:18:05.903 "write": true, 00:18:05.903 "unmap": true, 00:18:05.903 "flush": true, 00:18:05.903 "reset": true, 00:18:05.903 "nvme_admin": false, 00:18:05.903 "nvme_io": false, 00:18:05.903 "nvme_io_md": false, 00:18:05.903 "write_zeroes": true, 00:18:05.903 "zcopy": true, 00:18:05.903 "get_zone_info": false, 00:18:05.903 "zone_management": false, 00:18:05.903 "zone_append": false, 00:18:05.903 "compare": false, 00:18:05.903 "compare_and_write": false, 00:18:05.903 "abort": true, 00:18:05.903 "seek_hole": false, 00:18:05.903 "seek_data": false, 00:18:05.903 "copy": true, 00:18:05.903 "nvme_iov_md": false 00:18:05.903 }, 00:18:05.903 "memory_domains": [ 00:18:05.903 { 00:18:05.903 "dma_device_id": "system", 00:18:05.903 "dma_device_type": 1 00:18:05.903 }, 00:18:05.903 { 00:18:05.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.903 "dma_device_type": 2 00:18:05.903 } 00:18:05.903 ], 00:18:05.903 "driver_specific": {} 00:18:05.903 } 00:18:05.903 ] 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:05.903 
14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:05.903 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.904 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.904 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.904 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.904 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.904 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.904 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.904 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.904 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.904 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.904 "name": "Existed_Raid", 00:18:05.904 "uuid": "9339c641-1e88-443c-a5d6-24d8bd16aed4", 00:18:05.904 "strip_size_kb": 64, 00:18:05.904 "state": 
"online", 00:18:05.904 "raid_level": "raid5f", 00:18:05.904 "superblock": false, 00:18:05.904 "num_base_bdevs": 4, 00:18:05.904 "num_base_bdevs_discovered": 4, 00:18:05.904 "num_base_bdevs_operational": 4, 00:18:05.904 "base_bdevs_list": [ 00:18:05.904 { 00:18:05.904 "name": "BaseBdev1", 00:18:05.904 "uuid": "bd9d0c20-edd6-47cf-8389-66c7755628f9", 00:18:05.904 "is_configured": true, 00:18:05.904 "data_offset": 0, 00:18:05.904 "data_size": 65536 00:18:05.904 }, 00:18:05.904 { 00:18:05.904 "name": "BaseBdev2", 00:18:05.904 "uuid": "3d24aa44-6cec-494a-80d2-7e1c2df03be3", 00:18:05.904 "is_configured": true, 00:18:05.904 "data_offset": 0, 00:18:05.904 "data_size": 65536 00:18:05.904 }, 00:18:05.904 { 00:18:05.904 "name": "BaseBdev3", 00:18:05.904 "uuid": "70049c5c-782c-459a-a420-6324cff4fd5a", 00:18:05.904 "is_configured": true, 00:18:05.904 "data_offset": 0, 00:18:05.904 "data_size": 65536 00:18:05.904 }, 00:18:05.904 { 00:18:05.904 "name": "BaseBdev4", 00:18:05.904 "uuid": "927786b3-05c4-44c8-89c6-4792b50f7af5", 00:18:05.904 "is_configured": true, 00:18:05.904 "data_offset": 0, 00:18:05.904 "data_size": 65536 00:18:05.904 } 00:18:05.904 ] 00:18:05.904 }' 00:18:05.904 14:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.904 14:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.480 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:06.480 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:06.480 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:06.480 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:06.480 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:06.480 14:28:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:06.480 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:06.480 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.480 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.480 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:06.480 [2024-11-20 14:28:45.251238] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.480 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.480 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:06.480 "name": "Existed_Raid", 00:18:06.480 "aliases": [ 00:18:06.480 "9339c641-1e88-443c-a5d6-24d8bd16aed4" 00:18:06.480 ], 00:18:06.480 "product_name": "Raid Volume", 00:18:06.480 "block_size": 512, 00:18:06.480 "num_blocks": 196608, 00:18:06.480 "uuid": "9339c641-1e88-443c-a5d6-24d8bd16aed4", 00:18:06.480 "assigned_rate_limits": { 00:18:06.480 "rw_ios_per_sec": 0, 00:18:06.480 "rw_mbytes_per_sec": 0, 00:18:06.480 "r_mbytes_per_sec": 0, 00:18:06.480 "w_mbytes_per_sec": 0 00:18:06.480 }, 00:18:06.480 "claimed": false, 00:18:06.480 "zoned": false, 00:18:06.480 "supported_io_types": { 00:18:06.480 "read": true, 00:18:06.480 "write": true, 00:18:06.480 "unmap": false, 00:18:06.480 "flush": false, 00:18:06.480 "reset": true, 00:18:06.480 "nvme_admin": false, 00:18:06.480 "nvme_io": false, 00:18:06.480 "nvme_io_md": false, 00:18:06.480 "write_zeroes": true, 00:18:06.480 "zcopy": false, 00:18:06.480 "get_zone_info": false, 00:18:06.480 "zone_management": false, 00:18:06.480 "zone_append": false, 00:18:06.480 "compare": false, 00:18:06.480 "compare_and_write": false, 00:18:06.480 "abort": false, 
00:18:06.480 "seek_hole": false, 00:18:06.480 "seek_data": false, 00:18:06.480 "copy": false, 00:18:06.480 "nvme_iov_md": false 00:18:06.480 }, 00:18:06.480 "driver_specific": { 00:18:06.480 "raid": { 00:18:06.480 "uuid": "9339c641-1e88-443c-a5d6-24d8bd16aed4", 00:18:06.480 "strip_size_kb": 64, 00:18:06.480 "state": "online", 00:18:06.480 "raid_level": "raid5f", 00:18:06.480 "superblock": false, 00:18:06.480 "num_base_bdevs": 4, 00:18:06.480 "num_base_bdevs_discovered": 4, 00:18:06.480 "num_base_bdevs_operational": 4, 00:18:06.480 "base_bdevs_list": [ 00:18:06.480 { 00:18:06.480 "name": "BaseBdev1", 00:18:06.480 "uuid": "bd9d0c20-edd6-47cf-8389-66c7755628f9", 00:18:06.480 "is_configured": true, 00:18:06.480 "data_offset": 0, 00:18:06.480 "data_size": 65536 00:18:06.480 }, 00:18:06.480 { 00:18:06.480 "name": "BaseBdev2", 00:18:06.480 "uuid": "3d24aa44-6cec-494a-80d2-7e1c2df03be3", 00:18:06.480 "is_configured": true, 00:18:06.480 "data_offset": 0, 00:18:06.480 "data_size": 65536 00:18:06.480 }, 00:18:06.480 { 00:18:06.480 "name": "BaseBdev3", 00:18:06.480 "uuid": "70049c5c-782c-459a-a420-6324cff4fd5a", 00:18:06.480 "is_configured": true, 00:18:06.480 "data_offset": 0, 00:18:06.480 "data_size": 65536 00:18:06.480 }, 00:18:06.480 { 00:18:06.480 "name": "BaseBdev4", 00:18:06.480 "uuid": "927786b3-05c4-44c8-89c6-4792b50f7af5", 00:18:06.480 "is_configured": true, 00:18:06.480 "data_offset": 0, 00:18:06.480 "data_size": 65536 00:18:06.480 } 00:18:06.480 ] 00:18:06.480 } 00:18:06.480 } 00:18:06.480 }' 00:18:06.480 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:06.480 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:06.480 BaseBdev2 00:18:06.480 BaseBdev3 00:18:06.480 BaseBdev4' 00:18:06.480 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:18:06.480 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:06.481 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.481 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:06.481 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.481 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.481 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.481 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.481 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:06.481 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:06.481 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.481 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.481 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:06.481 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.481 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:06.739 14:28:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == 
\5\1\2\ \ \ ]] 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.739 [2024-11-20 14:28:45.575088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.739 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.740 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.740 14:28:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.740 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.740 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.740 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.740 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.740 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.740 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.740 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.740 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.740 "name": "Existed_Raid", 00:18:06.740 "uuid": "9339c641-1e88-443c-a5d6-24d8bd16aed4", 00:18:06.740 "strip_size_kb": 64, 00:18:06.740 "state": "online", 00:18:06.740 "raid_level": "raid5f", 00:18:06.740 "superblock": false, 00:18:06.740 "num_base_bdevs": 4, 00:18:06.740 "num_base_bdevs_discovered": 3, 00:18:06.740 "num_base_bdevs_operational": 3, 00:18:06.740 "base_bdevs_list": [ 00:18:06.740 { 00:18:06.740 "name": null, 00:18:06.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.740 "is_configured": false, 00:18:06.740 "data_offset": 0, 00:18:06.740 "data_size": 65536 00:18:06.740 }, 00:18:06.740 { 00:18:06.740 "name": "BaseBdev2", 00:18:06.740 "uuid": "3d24aa44-6cec-494a-80d2-7e1c2df03be3", 00:18:06.740 "is_configured": true, 00:18:06.740 "data_offset": 0, 00:18:06.740 "data_size": 65536 00:18:06.740 }, 00:18:06.740 { 00:18:06.740 "name": "BaseBdev3", 00:18:06.740 "uuid": "70049c5c-782c-459a-a420-6324cff4fd5a", 00:18:06.740 "is_configured": true, 00:18:06.740 
"data_offset": 0, 00:18:06.740 "data_size": 65536 00:18:06.740 }, 00:18:06.740 { 00:18:06.740 "name": "BaseBdev4", 00:18:06.740 "uuid": "927786b3-05c4-44c8-89c6-4792b50f7af5", 00:18:06.740 "is_configured": true, 00:18:06.740 "data_offset": 0, 00:18:06.740 "data_size": 65536 00:18:06.740 } 00:18:06.740 ] 00:18:06.740 }' 00:18:06.740 14:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.740 14:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.306 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:07.306 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:07.306 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:07.306 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.306 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.306 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.306 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.306 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:07.306 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:07.306 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:07.306 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.306 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.306 [2024-11-20 14:28:46.232562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:07.306 
[2024-11-20 14:28:46.232701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.565 [2024-11-20 14:28:46.321597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.565 [2024-11-20 14:28:46.373621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # 
(( i++ )) 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:07.565 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.825 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:07.825 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:07.825 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:07.825 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.825 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.825 [2024-11-20 14:28:46.568640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:07.825 [2024-11-20 14:28:46.568714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:07.825 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.825 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:07.825 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:07.825 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.825 14:28:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.825 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.825 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:07.825 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.826 BaseBdev2 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.826 [ 00:18:07.826 { 00:18:07.826 "name": "BaseBdev2", 00:18:07.826 "aliases": [ 00:18:07.826 "4c9181cb-1bae-4940-b297-0e579c700d9e" 00:18:07.826 ], 00:18:07.826 "product_name": "Malloc disk", 00:18:07.826 "block_size": 512, 00:18:07.826 "num_blocks": 65536, 00:18:07.826 "uuid": "4c9181cb-1bae-4940-b297-0e579c700d9e", 00:18:07.826 "assigned_rate_limits": { 00:18:07.826 "rw_ios_per_sec": 0, 00:18:07.826 "rw_mbytes_per_sec": 0, 00:18:07.826 "r_mbytes_per_sec": 0, 00:18:07.826 "w_mbytes_per_sec": 0 00:18:07.826 }, 00:18:07.826 "claimed": false, 00:18:07.826 "zoned": false, 00:18:07.826 "supported_io_types": { 00:18:07.826 "read": true, 00:18:07.826 "write": true, 00:18:07.826 "unmap": true, 00:18:07.826 "flush": true, 00:18:07.826 "reset": true, 00:18:07.826 "nvme_admin": false, 00:18:07.826 "nvme_io": false, 00:18:07.826 "nvme_io_md": false, 00:18:07.826 "write_zeroes": true, 00:18:07.826 "zcopy": true, 00:18:07.826 "get_zone_info": false, 00:18:07.826 "zone_management": false, 00:18:07.826 "zone_append": false, 00:18:07.826 "compare": false, 
00:18:07.826 "compare_and_write": false, 00:18:07.826 "abort": true, 00:18:07.826 "seek_hole": false, 00:18:07.826 "seek_data": false, 00:18:07.826 "copy": true, 00:18:07.826 "nvme_iov_md": false 00:18:07.826 }, 00:18:07.826 "memory_domains": [ 00:18:07.826 { 00:18:07.826 "dma_device_id": "system", 00:18:07.826 "dma_device_type": 1 00:18:07.826 }, 00:18:07.826 { 00:18:07.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.826 "dma_device_type": 2 00:18:07.826 } 00:18:07.826 ], 00:18:07.826 "driver_specific": {} 00:18:07.826 } 00:18:07.826 ] 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.826 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.086 BaseBdev3 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.086 [ 00:18:08.086 { 00:18:08.086 "name": "BaseBdev3", 00:18:08.086 "aliases": [ 00:18:08.086 "daddfe56-2294-47f0-bd84-19bd8410c9f3" 00:18:08.086 ], 00:18:08.086 "product_name": "Malloc disk", 00:18:08.086 "block_size": 512, 00:18:08.086 "num_blocks": 65536, 00:18:08.086 "uuid": "daddfe56-2294-47f0-bd84-19bd8410c9f3", 00:18:08.086 "assigned_rate_limits": { 00:18:08.086 "rw_ios_per_sec": 0, 00:18:08.086 "rw_mbytes_per_sec": 0, 00:18:08.086 "r_mbytes_per_sec": 0, 00:18:08.086 "w_mbytes_per_sec": 0 00:18:08.086 }, 00:18:08.086 "claimed": false, 00:18:08.086 "zoned": false, 00:18:08.086 "supported_io_types": { 00:18:08.086 "read": true, 00:18:08.086 "write": true, 00:18:08.086 "unmap": true, 00:18:08.086 "flush": true, 00:18:08.086 "reset": true, 00:18:08.086 "nvme_admin": false, 00:18:08.086 "nvme_io": false, 00:18:08.086 "nvme_io_md": false, 00:18:08.086 "write_zeroes": true, 00:18:08.086 "zcopy": true, 00:18:08.086 "get_zone_info": false, 00:18:08.086 "zone_management": false, 00:18:08.086 "zone_append": 
false, 00:18:08.086 "compare": false, 00:18:08.086 "compare_and_write": false, 00:18:08.086 "abort": true, 00:18:08.086 "seek_hole": false, 00:18:08.086 "seek_data": false, 00:18:08.086 "copy": true, 00:18:08.086 "nvme_iov_md": false 00:18:08.086 }, 00:18:08.086 "memory_domains": [ 00:18:08.086 { 00:18:08.086 "dma_device_id": "system", 00:18:08.086 "dma_device_type": 1 00:18:08.086 }, 00:18:08.086 { 00:18:08.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.086 "dma_device_type": 2 00:18:08.086 } 00:18:08.086 ], 00:18:08.086 "driver_specific": {} 00:18:08.086 } 00:18:08.086 ] 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.086 BaseBdev4 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:08.086 14:28:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.086 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.086 [ 00:18:08.086 { 00:18:08.086 "name": "BaseBdev4", 00:18:08.086 "aliases": [ 00:18:08.086 "b0e3086b-0022-4ec6-8bc5-0049372416e6" 00:18:08.086 ], 00:18:08.086 "product_name": "Malloc disk", 00:18:08.086 "block_size": 512, 00:18:08.086 "num_blocks": 65536, 00:18:08.086 "uuid": "b0e3086b-0022-4ec6-8bc5-0049372416e6", 00:18:08.086 "assigned_rate_limits": { 00:18:08.086 "rw_ios_per_sec": 0, 00:18:08.086 "rw_mbytes_per_sec": 0, 00:18:08.086 "r_mbytes_per_sec": 0, 00:18:08.086 "w_mbytes_per_sec": 0 00:18:08.086 }, 00:18:08.086 "claimed": false, 00:18:08.086 "zoned": false, 00:18:08.086 "supported_io_types": { 00:18:08.086 "read": true, 00:18:08.086 "write": true, 00:18:08.086 "unmap": true, 00:18:08.087 "flush": true, 00:18:08.087 "reset": true, 00:18:08.087 "nvme_admin": false, 00:18:08.087 "nvme_io": false, 00:18:08.087 "nvme_io_md": false, 00:18:08.087 "write_zeroes": true, 00:18:08.087 "zcopy": true, 00:18:08.087 "get_zone_info": false, 00:18:08.087 
"zone_management": false, 00:18:08.087 "zone_append": false, 00:18:08.087 "compare": false, 00:18:08.087 "compare_and_write": false, 00:18:08.087 "abort": true, 00:18:08.087 "seek_hole": false, 00:18:08.087 "seek_data": false, 00:18:08.087 "copy": true, 00:18:08.087 "nvme_iov_md": false 00:18:08.087 }, 00:18:08.087 "memory_domains": [ 00:18:08.087 { 00:18:08.087 "dma_device_id": "system", 00:18:08.087 "dma_device_type": 1 00:18:08.087 }, 00:18:08.087 { 00:18:08.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.087 "dma_device_type": 2 00:18:08.087 } 00:18:08.087 ], 00:18:08.087 "driver_specific": {} 00:18:08.087 } 00:18:08.087 ] 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.087 [2024-11-20 14:28:46.949344] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:08.087 [2024-11-20 14:28:46.949400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:08.087 [2024-11-20 14:28:46.949432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:08.087 [2024-11-20 14:28:46.951982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 
00:18:08.087 [2024-11-20 14:28:46.952076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.087 "name": "Existed_Raid", 00:18:08.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.087 "strip_size_kb": 64, 00:18:08.087 "state": "configuring", 00:18:08.087 "raid_level": "raid5f", 00:18:08.087 "superblock": false, 00:18:08.087 "num_base_bdevs": 4, 00:18:08.087 "num_base_bdevs_discovered": 3, 00:18:08.087 "num_base_bdevs_operational": 4, 00:18:08.087 "base_bdevs_list": [ 00:18:08.087 { 00:18:08.087 "name": "BaseBdev1", 00:18:08.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.087 "is_configured": false, 00:18:08.087 "data_offset": 0, 00:18:08.087 "data_size": 0 00:18:08.087 }, 00:18:08.087 { 00:18:08.087 "name": "BaseBdev2", 00:18:08.087 "uuid": "4c9181cb-1bae-4940-b297-0e579c700d9e", 00:18:08.087 "is_configured": true, 00:18:08.087 "data_offset": 0, 00:18:08.087 "data_size": 65536 00:18:08.087 }, 00:18:08.087 { 00:18:08.087 "name": "BaseBdev3", 00:18:08.087 "uuid": "daddfe56-2294-47f0-bd84-19bd8410c9f3", 00:18:08.087 "is_configured": true, 00:18:08.087 "data_offset": 0, 00:18:08.087 "data_size": 65536 00:18:08.087 }, 00:18:08.087 { 00:18:08.087 "name": "BaseBdev4", 00:18:08.087 "uuid": "b0e3086b-0022-4ec6-8bc5-0049372416e6", 00:18:08.087 "is_configured": true, 00:18:08.087 "data_offset": 0, 00:18:08.087 "data_size": 65536 00:18:08.087 } 00:18:08.087 ] 00:18:08.087 }' 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.087 14:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.654 14:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:08.654 14:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.654 14:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:08.654 [2024-11-20 14:28:47.481549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:08.654 14:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.654 14:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:08.654 14:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.654 14:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.654 14:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:08.654 14:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.654 14:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:08.654 14:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.654 14:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.654 14:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.655 14:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.655 14:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.655 14:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.655 14:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.655 14:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.655 14:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.655 14:28:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.655 "name": "Existed_Raid", 00:18:08.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.655 "strip_size_kb": 64, 00:18:08.655 "state": "configuring", 00:18:08.655 "raid_level": "raid5f", 00:18:08.655 "superblock": false, 00:18:08.655 "num_base_bdevs": 4, 00:18:08.655 "num_base_bdevs_discovered": 2, 00:18:08.655 "num_base_bdevs_operational": 4, 00:18:08.655 "base_bdevs_list": [ 00:18:08.655 { 00:18:08.655 "name": "BaseBdev1", 00:18:08.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.655 "is_configured": false, 00:18:08.655 "data_offset": 0, 00:18:08.655 "data_size": 0 00:18:08.655 }, 00:18:08.655 { 00:18:08.655 "name": null, 00:18:08.655 "uuid": "4c9181cb-1bae-4940-b297-0e579c700d9e", 00:18:08.655 "is_configured": false, 00:18:08.655 "data_offset": 0, 00:18:08.655 "data_size": 65536 00:18:08.655 }, 00:18:08.655 { 00:18:08.655 "name": "BaseBdev3", 00:18:08.655 "uuid": "daddfe56-2294-47f0-bd84-19bd8410c9f3", 00:18:08.655 "is_configured": true, 00:18:08.655 "data_offset": 0, 00:18:08.655 "data_size": 65536 00:18:08.655 }, 00:18:08.655 { 00:18:08.655 "name": "BaseBdev4", 00:18:08.655 "uuid": "b0e3086b-0022-4ec6-8bc5-0049372416e6", 00:18:08.655 "is_configured": true, 00:18:08.655 "data_offset": 0, 00:18:08.655 "data_size": 65536 00:18:08.655 } 00:18:08.655 ] 00:18:08.655 }' 00:18:08.655 14:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.655 14:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.222 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.222 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:09.222 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.222 14:28:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.222 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.222 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:09.222 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:09.222 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.222 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.481 [2024-11-20 14:28:48.204257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.481 BaseBdev1 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.481 [ 00:18:09.481 { 00:18:09.481 "name": "BaseBdev1", 00:18:09.481 "aliases": [ 00:18:09.481 "362dd995-db71-4b14-9068-19e6a580271e" 00:18:09.481 ], 00:18:09.481 "product_name": "Malloc disk", 00:18:09.481 "block_size": 512, 00:18:09.481 "num_blocks": 65536, 00:18:09.481 "uuid": "362dd995-db71-4b14-9068-19e6a580271e", 00:18:09.481 "assigned_rate_limits": { 00:18:09.481 "rw_ios_per_sec": 0, 00:18:09.481 "rw_mbytes_per_sec": 0, 00:18:09.481 "r_mbytes_per_sec": 0, 00:18:09.481 "w_mbytes_per_sec": 0 00:18:09.481 }, 00:18:09.481 "claimed": true, 00:18:09.481 "claim_type": "exclusive_write", 00:18:09.481 "zoned": false, 00:18:09.481 "supported_io_types": { 00:18:09.481 "read": true, 00:18:09.481 "write": true, 00:18:09.481 "unmap": true, 00:18:09.481 "flush": true, 00:18:09.481 "reset": true, 00:18:09.481 "nvme_admin": false, 00:18:09.481 "nvme_io": false, 00:18:09.481 "nvme_io_md": false, 00:18:09.481 "write_zeroes": true, 00:18:09.481 "zcopy": true, 00:18:09.481 "get_zone_info": false, 00:18:09.481 "zone_management": false, 00:18:09.481 "zone_append": false, 00:18:09.481 "compare": false, 00:18:09.481 "compare_and_write": false, 00:18:09.481 "abort": true, 00:18:09.481 "seek_hole": false, 00:18:09.481 "seek_data": false, 00:18:09.481 "copy": true, 00:18:09.481 "nvme_iov_md": false 00:18:09.481 }, 00:18:09.481 "memory_domains": [ 00:18:09.481 { 00:18:09.481 "dma_device_id": "system", 00:18:09.481 "dma_device_type": 1 00:18:09.481 }, 00:18:09.481 { 00:18:09.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.481 "dma_device_type": 2 00:18:09.481 } 00:18:09.481 ], 
00:18:09.481 "driver_specific": {} 00:18:09.481 } 00:18:09.481 ] 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.481 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.481 "name": "Existed_Raid", 00:18:09.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.481 "strip_size_kb": 64, 00:18:09.481 "state": "configuring", 00:18:09.481 "raid_level": "raid5f", 00:18:09.481 "superblock": false, 00:18:09.481 "num_base_bdevs": 4, 00:18:09.481 "num_base_bdevs_discovered": 3, 00:18:09.481 "num_base_bdevs_operational": 4, 00:18:09.481 "base_bdevs_list": [ 00:18:09.481 { 00:18:09.481 "name": "BaseBdev1", 00:18:09.481 "uuid": "362dd995-db71-4b14-9068-19e6a580271e", 00:18:09.481 "is_configured": true, 00:18:09.481 "data_offset": 0, 00:18:09.481 "data_size": 65536 00:18:09.481 }, 00:18:09.481 { 00:18:09.481 "name": null, 00:18:09.481 "uuid": "4c9181cb-1bae-4940-b297-0e579c700d9e", 00:18:09.481 "is_configured": false, 00:18:09.481 "data_offset": 0, 00:18:09.481 "data_size": 65536 00:18:09.481 }, 00:18:09.481 { 00:18:09.481 "name": "BaseBdev3", 00:18:09.481 "uuid": "daddfe56-2294-47f0-bd84-19bd8410c9f3", 00:18:09.482 "is_configured": true, 00:18:09.482 "data_offset": 0, 00:18:09.482 "data_size": 65536 00:18:09.482 }, 00:18:09.482 { 00:18:09.482 "name": "BaseBdev4", 00:18:09.482 "uuid": "b0e3086b-0022-4ec6-8bc5-0049372416e6", 00:18:09.482 "is_configured": true, 00:18:09.482 "data_offset": 0, 00:18:09.482 "data_size": 65536 00:18:09.482 } 00:18:09.482 ] 00:18:09.482 }' 00:18:09.482 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.482 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.049 [2024-11-20 14:28:48.812645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.049 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.049 "name": "Existed_Raid", 00:18:10.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.049 "strip_size_kb": 64, 00:18:10.049 "state": "configuring", 00:18:10.049 "raid_level": "raid5f", 00:18:10.049 "superblock": false, 00:18:10.049 "num_base_bdevs": 4, 00:18:10.049 "num_base_bdevs_discovered": 2, 00:18:10.050 "num_base_bdevs_operational": 4, 00:18:10.050 "base_bdevs_list": [ 00:18:10.050 { 00:18:10.050 "name": "BaseBdev1", 00:18:10.050 "uuid": "362dd995-db71-4b14-9068-19e6a580271e", 00:18:10.050 "is_configured": true, 00:18:10.050 "data_offset": 0, 00:18:10.050 "data_size": 65536 00:18:10.050 }, 00:18:10.050 { 00:18:10.050 "name": null, 00:18:10.050 "uuid": "4c9181cb-1bae-4940-b297-0e579c700d9e", 00:18:10.050 "is_configured": false, 00:18:10.050 "data_offset": 0, 00:18:10.050 "data_size": 65536 00:18:10.050 }, 00:18:10.050 { 00:18:10.050 "name": null, 00:18:10.050 "uuid": "daddfe56-2294-47f0-bd84-19bd8410c9f3", 00:18:10.050 "is_configured": false, 00:18:10.050 "data_offset": 0, 00:18:10.050 "data_size": 65536 00:18:10.050 }, 00:18:10.050 { 00:18:10.050 "name": "BaseBdev4", 00:18:10.050 "uuid": "b0e3086b-0022-4ec6-8bc5-0049372416e6", 00:18:10.050 
"is_configured": true, 00:18:10.050 "data_offset": 0, 00:18:10.050 "data_size": 65536 00:18:10.050 } 00:18:10.050 ] 00:18:10.050 }' 00:18:10.050 14:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.050 14:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.636 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:10.636 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.636 14:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.636 14:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.636 14:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.636 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:10.636 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:10.636 14:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.636 14:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.636 [2024-11-20 14:28:49.396817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:10.636 14:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.636 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:10.636 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:10.637 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:18:10.637 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.637 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.637 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:10.637 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.637 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.637 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.637 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.637 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.637 14:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.637 14:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.637 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.637 14:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.637 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.637 "name": "Existed_Raid", 00:18:10.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.637 "strip_size_kb": 64, 00:18:10.637 "state": "configuring", 00:18:10.637 "raid_level": "raid5f", 00:18:10.637 "superblock": false, 00:18:10.637 "num_base_bdevs": 4, 00:18:10.637 "num_base_bdevs_discovered": 3, 00:18:10.637 "num_base_bdevs_operational": 4, 00:18:10.637 "base_bdevs_list": [ 00:18:10.637 { 00:18:10.637 "name": "BaseBdev1", 00:18:10.637 "uuid": 
"362dd995-db71-4b14-9068-19e6a580271e", 00:18:10.637 "is_configured": true, 00:18:10.637 "data_offset": 0, 00:18:10.637 "data_size": 65536 00:18:10.637 }, 00:18:10.637 { 00:18:10.637 "name": null, 00:18:10.637 "uuid": "4c9181cb-1bae-4940-b297-0e579c700d9e", 00:18:10.637 "is_configured": false, 00:18:10.637 "data_offset": 0, 00:18:10.637 "data_size": 65536 00:18:10.637 }, 00:18:10.637 { 00:18:10.637 "name": "BaseBdev3", 00:18:10.637 "uuid": "daddfe56-2294-47f0-bd84-19bd8410c9f3", 00:18:10.637 "is_configured": true, 00:18:10.637 "data_offset": 0, 00:18:10.637 "data_size": 65536 00:18:10.637 }, 00:18:10.637 { 00:18:10.637 "name": "BaseBdev4", 00:18:10.637 "uuid": "b0e3086b-0022-4ec6-8bc5-0049372416e6", 00:18:10.637 "is_configured": true, 00:18:10.637 "data_offset": 0, 00:18:10.637 "data_size": 65536 00:18:10.637 } 00:18:10.637 ] 00:18:10.637 }' 00:18:10.637 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.637 14:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.204 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.204 14:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.204 14:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.204 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:11.204 14:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.204 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:11.204 14:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:11.204 14:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:11.204 14:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.204 [2024-11-20 14:28:49.977024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.204 14:28:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.204 "name": "Existed_Raid", 00:18:11.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.204 "strip_size_kb": 64, 00:18:11.204 "state": "configuring", 00:18:11.204 "raid_level": "raid5f", 00:18:11.204 "superblock": false, 00:18:11.204 "num_base_bdevs": 4, 00:18:11.204 "num_base_bdevs_discovered": 2, 00:18:11.204 "num_base_bdevs_operational": 4, 00:18:11.204 "base_bdevs_list": [ 00:18:11.204 { 00:18:11.204 "name": null, 00:18:11.204 "uuid": "362dd995-db71-4b14-9068-19e6a580271e", 00:18:11.204 "is_configured": false, 00:18:11.204 "data_offset": 0, 00:18:11.204 "data_size": 65536 00:18:11.204 }, 00:18:11.204 { 00:18:11.204 "name": null, 00:18:11.204 "uuid": "4c9181cb-1bae-4940-b297-0e579c700d9e", 00:18:11.204 "is_configured": false, 00:18:11.204 "data_offset": 0, 00:18:11.204 "data_size": 65536 00:18:11.204 }, 00:18:11.204 { 00:18:11.204 "name": "BaseBdev3", 00:18:11.204 "uuid": "daddfe56-2294-47f0-bd84-19bd8410c9f3", 00:18:11.204 "is_configured": true, 00:18:11.204 "data_offset": 0, 00:18:11.204 "data_size": 65536 00:18:11.204 }, 00:18:11.204 { 00:18:11.204 "name": "BaseBdev4", 00:18:11.204 "uuid": "b0e3086b-0022-4ec6-8bc5-0049372416e6", 00:18:11.204 "is_configured": true, 00:18:11.204 "data_offset": 0, 00:18:11.204 "data_size": 65536 00:18:11.204 } 00:18:11.204 ] 00:18:11.204 }' 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.204 14:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:11.832 14:28:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.832 [2024-11-20 14:28:50.665174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.832 14:28:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.832 "name": "Existed_Raid", 00:18:11.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.832 "strip_size_kb": 64, 00:18:11.832 "state": "configuring", 00:18:11.832 "raid_level": "raid5f", 00:18:11.832 "superblock": false, 00:18:11.832 "num_base_bdevs": 4, 00:18:11.832 "num_base_bdevs_discovered": 3, 00:18:11.832 "num_base_bdevs_operational": 4, 00:18:11.832 "base_bdevs_list": [ 00:18:11.832 { 00:18:11.832 "name": null, 00:18:11.832 "uuid": "362dd995-db71-4b14-9068-19e6a580271e", 00:18:11.832 "is_configured": false, 00:18:11.832 "data_offset": 0, 00:18:11.832 "data_size": 65536 00:18:11.832 }, 00:18:11.832 { 00:18:11.832 "name": "BaseBdev2", 00:18:11.832 "uuid": "4c9181cb-1bae-4940-b297-0e579c700d9e", 00:18:11.832 "is_configured": true, 00:18:11.832 "data_offset": 0, 00:18:11.832 "data_size": 65536 00:18:11.832 }, 00:18:11.832 { 00:18:11.832 "name": "BaseBdev3", 00:18:11.832 "uuid": "daddfe56-2294-47f0-bd84-19bd8410c9f3", 00:18:11.832 "is_configured": true, 00:18:11.832 "data_offset": 0, 00:18:11.832 "data_size": 65536 00:18:11.832 }, 00:18:11.832 { 00:18:11.832 "name": 
"BaseBdev4", 00:18:11.832 "uuid": "b0e3086b-0022-4ec6-8bc5-0049372416e6", 00:18:11.832 "is_configured": true, 00:18:11.832 "data_offset": 0, 00:18:11.832 "data_size": 65536 00:18:11.832 } 00:18:11.832 ] 00:18:11.832 }' 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.832 14:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 362dd995-db71-4b14-9068-19e6a580271e 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.401 14:28:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.401 [2024-11-20 14:28:51.347191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:12.401 [2024-11-20 14:28:51.347264] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:12.401 [2024-11-20 14:28:51.347277] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:12.401 [2024-11-20 14:28:51.347629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:12.401 [2024-11-20 14:28:51.354250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:12.401 [2024-11-20 14:28:51.354429] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:12.401 [2024-11-20 14:28:51.354978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.401 NewBaseBdev 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.401 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.661 [ 00:18:12.661 { 00:18:12.661 "name": "NewBaseBdev", 00:18:12.661 "aliases": [ 00:18:12.661 "362dd995-db71-4b14-9068-19e6a580271e" 00:18:12.661 ], 00:18:12.661 "product_name": "Malloc disk", 00:18:12.661 "block_size": 512, 00:18:12.661 "num_blocks": 65536, 00:18:12.661 "uuid": "362dd995-db71-4b14-9068-19e6a580271e", 00:18:12.661 "assigned_rate_limits": { 00:18:12.661 "rw_ios_per_sec": 0, 00:18:12.661 "rw_mbytes_per_sec": 0, 00:18:12.661 "r_mbytes_per_sec": 0, 00:18:12.661 "w_mbytes_per_sec": 0 00:18:12.661 }, 00:18:12.661 "claimed": true, 00:18:12.661 "claim_type": "exclusive_write", 00:18:12.661 "zoned": false, 00:18:12.661 "supported_io_types": { 00:18:12.661 "read": true, 00:18:12.661 "write": true, 00:18:12.661 "unmap": true, 00:18:12.661 "flush": true, 00:18:12.661 "reset": true, 00:18:12.661 "nvme_admin": false, 00:18:12.661 "nvme_io": false, 00:18:12.661 "nvme_io_md": false, 00:18:12.661 "write_zeroes": true, 00:18:12.661 "zcopy": true, 00:18:12.661 "get_zone_info": false, 00:18:12.661 "zone_management": false, 00:18:12.661 "zone_append": false, 00:18:12.661 "compare": false, 00:18:12.661 "compare_and_write": false, 00:18:12.661 "abort": true, 00:18:12.661 "seek_hole": false, 00:18:12.661 "seek_data": false, 00:18:12.661 "copy": true, 00:18:12.661 "nvme_iov_md": false 00:18:12.661 }, 00:18:12.661 "memory_domains": [ 00:18:12.661 { 
00:18:12.661 "dma_device_id": "system", 00:18:12.661 "dma_device_type": 1 00:18:12.661 }, 00:18:12.661 { 00:18:12.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.661 "dma_device_type": 2 00:18:12.661 } 00:18:12.661 ], 00:18:12.661 "driver_specific": {} 00:18:12.661 } 00:18:12.661 ] 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.661 "name": "Existed_Raid", 00:18:12.661 "uuid": "d67813ac-490c-47c1-b3e3-3cfa7686b30c", 00:18:12.661 "strip_size_kb": 64, 00:18:12.661 "state": "online", 00:18:12.661 "raid_level": "raid5f", 00:18:12.661 "superblock": false, 00:18:12.661 "num_base_bdevs": 4, 00:18:12.661 "num_base_bdevs_discovered": 4, 00:18:12.661 "num_base_bdevs_operational": 4, 00:18:12.661 "base_bdevs_list": [ 00:18:12.661 { 00:18:12.661 "name": "NewBaseBdev", 00:18:12.661 "uuid": "362dd995-db71-4b14-9068-19e6a580271e", 00:18:12.661 "is_configured": true, 00:18:12.661 "data_offset": 0, 00:18:12.661 "data_size": 65536 00:18:12.661 }, 00:18:12.661 { 00:18:12.661 "name": "BaseBdev2", 00:18:12.661 "uuid": "4c9181cb-1bae-4940-b297-0e579c700d9e", 00:18:12.661 "is_configured": true, 00:18:12.661 "data_offset": 0, 00:18:12.661 "data_size": 65536 00:18:12.661 }, 00:18:12.661 { 00:18:12.661 "name": "BaseBdev3", 00:18:12.661 "uuid": "daddfe56-2294-47f0-bd84-19bd8410c9f3", 00:18:12.661 "is_configured": true, 00:18:12.661 "data_offset": 0, 00:18:12.661 "data_size": 65536 00:18:12.661 }, 00:18:12.661 { 00:18:12.661 "name": "BaseBdev4", 00:18:12.661 "uuid": "b0e3086b-0022-4ec6-8bc5-0049372416e6", 00:18:12.661 "is_configured": true, 00:18:12.661 "data_offset": 0, 00:18:12.661 "data_size": 65536 00:18:12.661 } 00:18:12.661 ] 00:18:12.661 }' 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.661 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.230 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:18:13.230 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:13.230 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:13.230 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:13.230 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:13.230 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:13.230 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:13.230 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.230 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.230 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:13.230 [2024-11-20 14:28:51.939292] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.230 14:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.230 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:13.230 "name": "Existed_Raid", 00:18:13.230 "aliases": [ 00:18:13.230 "d67813ac-490c-47c1-b3e3-3cfa7686b30c" 00:18:13.230 ], 00:18:13.230 "product_name": "Raid Volume", 00:18:13.230 "block_size": 512, 00:18:13.230 "num_blocks": 196608, 00:18:13.230 "uuid": "d67813ac-490c-47c1-b3e3-3cfa7686b30c", 00:18:13.230 "assigned_rate_limits": { 00:18:13.230 "rw_ios_per_sec": 0, 00:18:13.230 "rw_mbytes_per_sec": 0, 00:18:13.230 "r_mbytes_per_sec": 0, 00:18:13.230 "w_mbytes_per_sec": 0 00:18:13.230 }, 00:18:13.230 "claimed": false, 00:18:13.230 "zoned": false, 00:18:13.230 "supported_io_types": { 00:18:13.230 
"read": true, 00:18:13.230 "write": true, 00:18:13.230 "unmap": false, 00:18:13.230 "flush": false, 00:18:13.230 "reset": true, 00:18:13.230 "nvme_admin": false, 00:18:13.230 "nvme_io": false, 00:18:13.230 "nvme_io_md": false, 00:18:13.230 "write_zeroes": true, 00:18:13.230 "zcopy": false, 00:18:13.230 "get_zone_info": false, 00:18:13.230 "zone_management": false, 00:18:13.230 "zone_append": false, 00:18:13.230 "compare": false, 00:18:13.230 "compare_and_write": false, 00:18:13.230 "abort": false, 00:18:13.230 "seek_hole": false, 00:18:13.230 "seek_data": false, 00:18:13.230 "copy": false, 00:18:13.230 "nvme_iov_md": false 00:18:13.230 }, 00:18:13.230 "driver_specific": { 00:18:13.230 "raid": { 00:18:13.230 "uuid": "d67813ac-490c-47c1-b3e3-3cfa7686b30c", 00:18:13.230 "strip_size_kb": 64, 00:18:13.230 "state": "online", 00:18:13.230 "raid_level": "raid5f", 00:18:13.230 "superblock": false, 00:18:13.230 "num_base_bdevs": 4, 00:18:13.230 "num_base_bdevs_discovered": 4, 00:18:13.230 "num_base_bdevs_operational": 4, 00:18:13.230 "base_bdevs_list": [ 00:18:13.230 { 00:18:13.230 "name": "NewBaseBdev", 00:18:13.230 "uuid": "362dd995-db71-4b14-9068-19e6a580271e", 00:18:13.230 "is_configured": true, 00:18:13.230 "data_offset": 0, 00:18:13.230 "data_size": 65536 00:18:13.230 }, 00:18:13.230 { 00:18:13.230 "name": "BaseBdev2", 00:18:13.230 "uuid": "4c9181cb-1bae-4940-b297-0e579c700d9e", 00:18:13.231 "is_configured": true, 00:18:13.231 "data_offset": 0, 00:18:13.231 "data_size": 65536 00:18:13.231 }, 00:18:13.231 { 00:18:13.231 "name": "BaseBdev3", 00:18:13.231 "uuid": "daddfe56-2294-47f0-bd84-19bd8410c9f3", 00:18:13.231 "is_configured": true, 00:18:13.231 "data_offset": 0, 00:18:13.231 "data_size": 65536 00:18:13.231 }, 00:18:13.231 { 00:18:13.231 "name": "BaseBdev4", 00:18:13.231 "uuid": "b0e3086b-0022-4ec6-8bc5-0049372416e6", 00:18:13.231 "is_configured": true, 00:18:13.231 "data_offset": 0, 00:18:13.231 "data_size": 65536 00:18:13.231 } 00:18:13.231 ] 00:18:13.231 } 
00:18:13.231 } 00:18:13.231 }' 00:18:13.231 14:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:13.231 BaseBdev2 00:18:13.231 BaseBdev3 00:18:13.231 BaseBdev4' 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.231 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.490 14:28:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.490 [2024-11-20 14:28:52.307109] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:13.490 [2024-11-20 14:28:52.307149] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:13.490 [2024-11-20 14:28:52.307257] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.490 [2024-11-20 14:28:52.307649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.490 [2024-11-20 14:28:52.307668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83195 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83195 ']' 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83195 
00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83195 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83195' 00:18:13.490 killing process with pid 83195 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83195 00:18:13.490 [2024-11-20 14:28:52.350337] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:13.490 14:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83195 00:18:13.749 [2024-11-20 14:28:52.710548] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:15.126 00:18:15.126 real 0m13.020s 00:18:15.126 user 0m21.560s 00:18:15.126 sys 0m1.856s 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.126 ************************************ 00:18:15.126 END TEST raid5f_state_function_test 00:18:15.126 ************************************ 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.126 14:28:53 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:18:15.126 14:28:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:15.126 
14:28:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.126 14:28:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:15.126 ************************************ 00:18:15.126 START TEST raid5f_state_function_test_sb 00:18:15.126 ************************************ 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:15.126 
14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:15.126 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:15.126 Process raid pid: 83871 00:18:15.127 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83871 00:18:15.127 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid 
pid: 83871' 00:18:15.127 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83871 00:18:15.127 14:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:15.127 14:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83871 ']' 00:18:15.127 14:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.127 14:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.127 14:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.127 14:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.127 14:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.127 [2024-11-20 14:28:53.929718] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:18:15.127 [2024-11-20 14:28:53.930123] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.386 [2024-11-20 14:28:54.120541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.386 [2024-11-20 14:28:54.278067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.644 [2024-11-20 14:28:54.488386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:15.644 [2024-11-20 14:28:54.488446] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.903 [2024-11-20 14:28:54.857358] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:15.903 [2024-11-20 14:28:54.857424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:15.903 [2024-11-20 14:28:54.857453] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:15.903 [2024-11-20 14:28:54.857470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:15.903 [2024-11-20 14:28:54.857480] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:18:15.903 [2024-11-20 14:28:54.857495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:15.903 [2024-11-20 14:28:54.857505] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:15.903 [2024-11-20 14:28:54.857519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.903 14:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.162 14:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.162 "name": "Existed_Raid", 00:18:16.162 "uuid": "9ca625a9-30a5-4782-9c85-a179e81d53f9", 00:18:16.162 "strip_size_kb": 64, 00:18:16.162 "state": "configuring", 00:18:16.162 "raid_level": "raid5f", 00:18:16.162 "superblock": true, 00:18:16.162 "num_base_bdevs": 4, 00:18:16.162 "num_base_bdevs_discovered": 0, 00:18:16.162 "num_base_bdevs_operational": 4, 00:18:16.162 "base_bdevs_list": [ 00:18:16.162 { 00:18:16.162 "name": "BaseBdev1", 00:18:16.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.162 "is_configured": false, 00:18:16.162 "data_offset": 0, 00:18:16.162 "data_size": 0 00:18:16.162 }, 00:18:16.162 { 00:18:16.162 "name": "BaseBdev2", 00:18:16.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.162 "is_configured": false, 00:18:16.162 "data_offset": 0, 00:18:16.162 "data_size": 0 00:18:16.162 }, 00:18:16.162 { 00:18:16.162 "name": "BaseBdev3", 00:18:16.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.162 "is_configured": false, 00:18:16.162 "data_offset": 0, 00:18:16.162 "data_size": 0 00:18:16.162 }, 00:18:16.162 { 00:18:16.162 "name": "BaseBdev4", 00:18:16.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.162 "is_configured": false, 00:18:16.162 "data_offset": 0, 00:18:16.162 "data_size": 0 00:18:16.162 } 00:18:16.162 ] 00:18:16.162 }' 00:18:16.162 14:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.162 14:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:16.461 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:16.461 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.461 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.461 [2024-11-20 14:28:55.425488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:16.461 [2024-11-20 14:28:55.425534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:16.461 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.461 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:16.461 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.461 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.461 [2024-11-20 14:28:55.437514] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:16.461 [2024-11-20 14:28:55.437730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:16.461 [2024-11-20 14:28:55.437757] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:16.461 [2024-11-20 14:28:55.437775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:16.461 [2024-11-20 14:28:55.437785] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:16.461 [2024-11-20 14:28:55.437799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:16.461 [2024-11-20 14:28:55.437808] 
bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:16.461 [2024-11-20 14:28:55.437823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.722 [2024-11-20 14:28:55.483401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.722 BaseBdev1 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.722 [ 00:18:16.722 { 00:18:16.722 "name": "BaseBdev1", 00:18:16.722 "aliases": [ 00:18:16.722 "38fb6bae-3f4f-455f-95e0-ba18501a4734" 00:18:16.722 ], 00:18:16.722 "product_name": "Malloc disk", 00:18:16.722 "block_size": 512, 00:18:16.722 "num_blocks": 65536, 00:18:16.722 "uuid": "38fb6bae-3f4f-455f-95e0-ba18501a4734", 00:18:16.722 "assigned_rate_limits": { 00:18:16.722 "rw_ios_per_sec": 0, 00:18:16.722 "rw_mbytes_per_sec": 0, 00:18:16.722 "r_mbytes_per_sec": 0, 00:18:16.722 "w_mbytes_per_sec": 0 00:18:16.722 }, 00:18:16.722 "claimed": true, 00:18:16.722 "claim_type": "exclusive_write", 00:18:16.722 "zoned": false, 00:18:16.722 "supported_io_types": { 00:18:16.722 "read": true, 00:18:16.722 "write": true, 00:18:16.722 "unmap": true, 00:18:16.722 "flush": true, 00:18:16.722 "reset": true, 00:18:16.722 "nvme_admin": false, 00:18:16.722 "nvme_io": false, 00:18:16.722 "nvme_io_md": false, 00:18:16.722 "write_zeroes": true, 00:18:16.722 "zcopy": true, 00:18:16.722 "get_zone_info": false, 00:18:16.722 "zone_management": false, 00:18:16.722 "zone_append": false, 00:18:16.722 "compare": false, 00:18:16.722 "compare_and_write": false, 00:18:16.722 "abort": true, 00:18:16.722 "seek_hole": false, 00:18:16.722 "seek_data": false, 00:18:16.722 "copy": true, 00:18:16.722 "nvme_iov_md": false 00:18:16.722 }, 00:18:16.722 "memory_domains": [ 00:18:16.722 { 00:18:16.722 "dma_device_id": "system", 00:18:16.722 "dma_device_type": 1 00:18:16.722 }, 00:18:16.722 { 00:18:16.722 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:16.722 "dma_device_type": 2 00:18:16.722 } 00:18:16.722 ], 00:18:16.722 "driver_specific": {} 00:18:16.722 } 00:18:16.722 ] 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.722 14:28:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.722 "name": "Existed_Raid", 00:18:16.722 "uuid": "cea9a7d4-1d35-44e6-a688-44ff29841862", 00:18:16.722 "strip_size_kb": 64, 00:18:16.722 "state": "configuring", 00:18:16.722 "raid_level": "raid5f", 00:18:16.722 "superblock": true, 00:18:16.722 "num_base_bdevs": 4, 00:18:16.722 "num_base_bdevs_discovered": 1, 00:18:16.722 "num_base_bdevs_operational": 4, 00:18:16.722 "base_bdevs_list": [ 00:18:16.722 { 00:18:16.722 "name": "BaseBdev1", 00:18:16.722 "uuid": "38fb6bae-3f4f-455f-95e0-ba18501a4734", 00:18:16.722 "is_configured": true, 00:18:16.722 "data_offset": 2048, 00:18:16.722 "data_size": 63488 00:18:16.722 }, 00:18:16.722 { 00:18:16.722 "name": "BaseBdev2", 00:18:16.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.722 "is_configured": false, 00:18:16.722 "data_offset": 0, 00:18:16.722 "data_size": 0 00:18:16.722 }, 00:18:16.722 { 00:18:16.722 "name": "BaseBdev3", 00:18:16.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.722 "is_configured": false, 00:18:16.722 "data_offset": 0, 00:18:16.722 "data_size": 0 00:18:16.722 }, 00:18:16.722 { 00:18:16.722 "name": "BaseBdev4", 00:18:16.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.722 "is_configured": false, 00:18:16.722 "data_offset": 0, 00:18:16.722 "data_size": 0 00:18:16.722 } 00:18:16.722 ] 00:18:16.722 }' 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.722 14:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:17.291 14:28:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.291 [2024-11-20 14:28:56.031594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:17.291 [2024-11-20 14:28:56.031659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.291 [2024-11-20 14:28:56.039666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:17.291 [2024-11-20 14:28:56.042080] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:17.291 [2024-11-20 14:28:56.042136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:17.291 [2024-11-20 14:28:56.042153] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:17.291 [2024-11-20 14:28:56.042171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:17.291 [2024-11-20 14:28:56.042181] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:17.291 [2024-11-20 14:28:56.042196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.291 14:28:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.291 "name": "Existed_Raid", 00:18:17.291 "uuid": "1c2befb0-cd46-422b-88b1-dd52d54ca607", 00:18:17.291 "strip_size_kb": 64, 00:18:17.291 "state": "configuring", 00:18:17.291 "raid_level": "raid5f", 00:18:17.291 "superblock": true, 00:18:17.291 "num_base_bdevs": 4, 00:18:17.291 "num_base_bdevs_discovered": 1, 00:18:17.291 "num_base_bdevs_operational": 4, 00:18:17.291 "base_bdevs_list": [ 00:18:17.291 { 00:18:17.291 "name": "BaseBdev1", 00:18:17.291 "uuid": "38fb6bae-3f4f-455f-95e0-ba18501a4734", 00:18:17.291 "is_configured": true, 00:18:17.291 "data_offset": 2048, 00:18:17.291 "data_size": 63488 00:18:17.291 }, 00:18:17.291 { 00:18:17.291 "name": "BaseBdev2", 00:18:17.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.291 "is_configured": false, 00:18:17.291 "data_offset": 0, 00:18:17.291 "data_size": 0 00:18:17.291 }, 00:18:17.291 { 00:18:17.291 "name": "BaseBdev3", 00:18:17.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.291 "is_configured": false, 00:18:17.291 "data_offset": 0, 00:18:17.291 "data_size": 0 00:18:17.291 }, 00:18:17.291 { 00:18:17.291 "name": "BaseBdev4", 00:18:17.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.291 "is_configured": false, 00:18:17.291 "data_offset": 0, 00:18:17.291 "data_size": 0 00:18:17.291 } 00:18:17.291 ] 00:18:17.291 }' 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.291 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.859 [2024-11-20 14:28:56.598232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:17.859 BaseBdev2 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.859 [ 00:18:17.859 { 00:18:17.859 "name": "BaseBdev2", 00:18:17.859 "aliases": [ 00:18:17.859 
"c14036ce-1725-4d7f-9e86-80239182c9e0" 00:18:17.859 ], 00:18:17.859 "product_name": "Malloc disk", 00:18:17.859 "block_size": 512, 00:18:17.859 "num_blocks": 65536, 00:18:17.859 "uuid": "c14036ce-1725-4d7f-9e86-80239182c9e0", 00:18:17.859 "assigned_rate_limits": { 00:18:17.859 "rw_ios_per_sec": 0, 00:18:17.859 "rw_mbytes_per_sec": 0, 00:18:17.859 "r_mbytes_per_sec": 0, 00:18:17.859 "w_mbytes_per_sec": 0 00:18:17.859 }, 00:18:17.859 "claimed": true, 00:18:17.859 "claim_type": "exclusive_write", 00:18:17.859 "zoned": false, 00:18:17.859 "supported_io_types": { 00:18:17.859 "read": true, 00:18:17.859 "write": true, 00:18:17.859 "unmap": true, 00:18:17.859 "flush": true, 00:18:17.859 "reset": true, 00:18:17.859 "nvme_admin": false, 00:18:17.859 "nvme_io": false, 00:18:17.859 "nvme_io_md": false, 00:18:17.859 "write_zeroes": true, 00:18:17.859 "zcopy": true, 00:18:17.859 "get_zone_info": false, 00:18:17.859 "zone_management": false, 00:18:17.859 "zone_append": false, 00:18:17.859 "compare": false, 00:18:17.859 "compare_and_write": false, 00:18:17.859 "abort": true, 00:18:17.859 "seek_hole": false, 00:18:17.859 "seek_data": false, 00:18:17.859 "copy": true, 00:18:17.859 "nvme_iov_md": false 00:18:17.859 }, 00:18:17.859 "memory_domains": [ 00:18:17.859 { 00:18:17.859 "dma_device_id": "system", 00:18:17.859 "dma_device_type": 1 00:18:17.859 }, 00:18:17.859 { 00:18:17.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.859 "dma_device_type": 2 00:18:17.859 } 00:18:17.859 ], 00:18:17.859 "driver_specific": {} 00:18:17.859 } 00:18:17.859 ] 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.859 "name": "Existed_Raid", 00:18:17.859 "uuid": 
"1c2befb0-cd46-422b-88b1-dd52d54ca607", 00:18:17.859 "strip_size_kb": 64, 00:18:17.859 "state": "configuring", 00:18:17.859 "raid_level": "raid5f", 00:18:17.859 "superblock": true, 00:18:17.859 "num_base_bdevs": 4, 00:18:17.859 "num_base_bdevs_discovered": 2, 00:18:17.859 "num_base_bdevs_operational": 4, 00:18:17.859 "base_bdevs_list": [ 00:18:17.859 { 00:18:17.859 "name": "BaseBdev1", 00:18:17.859 "uuid": "38fb6bae-3f4f-455f-95e0-ba18501a4734", 00:18:17.859 "is_configured": true, 00:18:17.859 "data_offset": 2048, 00:18:17.859 "data_size": 63488 00:18:17.859 }, 00:18:17.859 { 00:18:17.859 "name": "BaseBdev2", 00:18:17.859 "uuid": "c14036ce-1725-4d7f-9e86-80239182c9e0", 00:18:17.859 "is_configured": true, 00:18:17.859 "data_offset": 2048, 00:18:17.859 "data_size": 63488 00:18:17.859 }, 00:18:17.859 { 00:18:17.859 "name": "BaseBdev3", 00:18:17.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.859 "is_configured": false, 00:18:17.859 "data_offset": 0, 00:18:17.859 "data_size": 0 00:18:17.859 }, 00:18:17.859 { 00:18:17.859 "name": "BaseBdev4", 00:18:17.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.859 "is_configured": false, 00:18:17.859 "data_offset": 0, 00:18:17.859 "data_size": 0 00:18:17.859 } 00:18:17.859 ] 00:18:17.859 }' 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.859 14:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.426 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:18.426 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.426 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.426 [2024-11-20 14:28:57.222956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:18.426 BaseBdev3 
00:18:18.426 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.426 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:18.426 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:18.426 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:18.426 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:18.426 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:18.426 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:18.426 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:18.426 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.426 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.426 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.426 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:18.426 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.426 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.426 [ 00:18:18.426 { 00:18:18.426 "name": "BaseBdev3", 00:18:18.426 "aliases": [ 00:18:18.426 "26c8d769-505b-467d-92fc-ba2d65d535fe" 00:18:18.426 ], 00:18:18.426 "product_name": "Malloc disk", 00:18:18.426 "block_size": 512, 00:18:18.426 "num_blocks": 65536, 00:18:18.426 "uuid": "26c8d769-505b-467d-92fc-ba2d65d535fe", 00:18:18.426 
"assigned_rate_limits": { 00:18:18.426 "rw_ios_per_sec": 0, 00:18:18.426 "rw_mbytes_per_sec": 0, 00:18:18.426 "r_mbytes_per_sec": 0, 00:18:18.426 "w_mbytes_per_sec": 0 00:18:18.426 }, 00:18:18.426 "claimed": true, 00:18:18.426 "claim_type": "exclusive_write", 00:18:18.426 "zoned": false, 00:18:18.426 "supported_io_types": { 00:18:18.426 "read": true, 00:18:18.426 "write": true, 00:18:18.427 "unmap": true, 00:18:18.427 "flush": true, 00:18:18.427 "reset": true, 00:18:18.427 "nvme_admin": false, 00:18:18.427 "nvme_io": false, 00:18:18.427 "nvme_io_md": false, 00:18:18.427 "write_zeroes": true, 00:18:18.427 "zcopy": true, 00:18:18.427 "get_zone_info": false, 00:18:18.427 "zone_management": false, 00:18:18.427 "zone_append": false, 00:18:18.427 "compare": false, 00:18:18.427 "compare_and_write": false, 00:18:18.427 "abort": true, 00:18:18.427 "seek_hole": false, 00:18:18.427 "seek_data": false, 00:18:18.427 "copy": true, 00:18:18.427 "nvme_iov_md": false 00:18:18.427 }, 00:18:18.427 "memory_domains": [ 00:18:18.427 { 00:18:18.427 "dma_device_id": "system", 00:18:18.427 "dma_device_type": 1 00:18:18.427 }, 00:18:18.427 { 00:18:18.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.427 "dma_device_type": 2 00:18:18.427 } 00:18:18.427 ], 00:18:18.427 "driver_specific": {} 00:18:18.427 } 00:18:18.427 ] 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.427 "name": "Existed_Raid", 00:18:18.427 "uuid": "1c2befb0-cd46-422b-88b1-dd52d54ca607", 00:18:18.427 "strip_size_kb": 64, 00:18:18.427 "state": "configuring", 00:18:18.427 "raid_level": "raid5f", 00:18:18.427 "superblock": true, 00:18:18.427 "num_base_bdevs": 4, 00:18:18.427 "num_base_bdevs_discovered": 3, 
00:18:18.427 "num_base_bdevs_operational": 4, 00:18:18.427 "base_bdevs_list": [ 00:18:18.427 { 00:18:18.427 "name": "BaseBdev1", 00:18:18.427 "uuid": "38fb6bae-3f4f-455f-95e0-ba18501a4734", 00:18:18.427 "is_configured": true, 00:18:18.427 "data_offset": 2048, 00:18:18.427 "data_size": 63488 00:18:18.427 }, 00:18:18.427 { 00:18:18.427 "name": "BaseBdev2", 00:18:18.427 "uuid": "c14036ce-1725-4d7f-9e86-80239182c9e0", 00:18:18.427 "is_configured": true, 00:18:18.427 "data_offset": 2048, 00:18:18.427 "data_size": 63488 00:18:18.427 }, 00:18:18.427 { 00:18:18.427 "name": "BaseBdev3", 00:18:18.427 "uuid": "26c8d769-505b-467d-92fc-ba2d65d535fe", 00:18:18.427 "is_configured": true, 00:18:18.427 "data_offset": 2048, 00:18:18.427 "data_size": 63488 00:18:18.427 }, 00:18:18.427 { 00:18:18.427 "name": "BaseBdev4", 00:18:18.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.427 "is_configured": false, 00:18:18.427 "data_offset": 0, 00:18:18.427 "data_size": 0 00:18:18.427 } 00:18:18.427 ] 00:18:18.427 }' 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.427 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.995 [2024-11-20 14:28:57.790126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:18.995 [2024-11-20 14:28:57.790477] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:18.995 [2024-11-20 14:28:57.790498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:18.995 BaseBdev4 
00:18:18.995 [2024-11-20 14:28:57.790831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.995 [2024-11-20 14:28:57.797697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:18.995 [2024-11-20 14:28:57.797726] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:18.995 [2024-11-20 14:28:57.798074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:18.995 14:28:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.995 [ 00:18:18.995 { 00:18:18.995 "name": "BaseBdev4", 00:18:18.995 "aliases": [ 00:18:18.995 "780d11cf-36d8-4908-bef7-0f89341e5afb" 00:18:18.995 ], 00:18:18.995 "product_name": "Malloc disk", 00:18:18.995 "block_size": 512, 00:18:18.995 "num_blocks": 65536, 00:18:18.995 "uuid": "780d11cf-36d8-4908-bef7-0f89341e5afb", 00:18:18.995 "assigned_rate_limits": { 00:18:18.995 "rw_ios_per_sec": 0, 00:18:18.995 "rw_mbytes_per_sec": 0, 00:18:18.995 "r_mbytes_per_sec": 0, 00:18:18.995 "w_mbytes_per_sec": 0 00:18:18.995 }, 00:18:18.995 "claimed": true, 00:18:18.995 "claim_type": "exclusive_write", 00:18:18.995 "zoned": false, 00:18:18.995 "supported_io_types": { 00:18:18.995 "read": true, 00:18:18.995 "write": true, 00:18:18.995 "unmap": true, 00:18:18.995 "flush": true, 00:18:18.995 "reset": true, 00:18:18.995 "nvme_admin": false, 00:18:18.995 "nvme_io": false, 00:18:18.995 "nvme_io_md": false, 00:18:18.995 "write_zeroes": true, 00:18:18.995 "zcopy": true, 00:18:18.995 "get_zone_info": false, 00:18:18.995 "zone_management": false, 00:18:18.995 "zone_append": false, 00:18:18.995 "compare": false, 00:18:18.995 "compare_and_write": false, 00:18:18.995 "abort": true, 00:18:18.995 "seek_hole": false, 00:18:18.995 "seek_data": false, 00:18:18.995 "copy": true, 00:18:18.995 "nvme_iov_md": false 00:18:18.995 }, 00:18:18.995 "memory_domains": [ 00:18:18.995 { 00:18:18.995 "dma_device_id": "system", 00:18:18.995 "dma_device_type": 1 00:18:18.995 }, 00:18:18.995 { 00:18:18.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.995 "dma_device_type": 2 00:18:18.995 } 00:18:18.995 ], 00:18:18.995 "driver_specific": {} 00:18:18.995 } 00:18:18.995 ] 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.995 14:28:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.995 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.995 "name": "Existed_Raid", 00:18:18.995 "uuid": "1c2befb0-cd46-422b-88b1-dd52d54ca607", 00:18:18.995 "strip_size_kb": 64, 00:18:18.995 "state": "online", 00:18:18.995 "raid_level": "raid5f", 00:18:18.995 "superblock": true, 00:18:18.995 "num_base_bdevs": 4, 00:18:18.995 "num_base_bdevs_discovered": 4, 00:18:18.995 "num_base_bdevs_operational": 4, 00:18:18.995 "base_bdevs_list": [ 00:18:18.995 { 00:18:18.995 "name": "BaseBdev1", 00:18:18.995 "uuid": "38fb6bae-3f4f-455f-95e0-ba18501a4734", 00:18:18.995 "is_configured": true, 00:18:18.995 "data_offset": 2048, 00:18:18.995 "data_size": 63488 00:18:18.995 }, 00:18:18.995 { 00:18:18.996 "name": "BaseBdev2", 00:18:18.996 "uuid": "c14036ce-1725-4d7f-9e86-80239182c9e0", 00:18:18.996 "is_configured": true, 00:18:18.996 "data_offset": 2048, 00:18:18.996 "data_size": 63488 00:18:18.996 }, 00:18:18.996 { 00:18:18.996 "name": "BaseBdev3", 00:18:18.996 "uuid": "26c8d769-505b-467d-92fc-ba2d65d535fe", 00:18:18.996 "is_configured": true, 00:18:18.996 "data_offset": 2048, 00:18:18.996 "data_size": 63488 00:18:18.996 }, 00:18:18.996 { 00:18:18.996 "name": "BaseBdev4", 00:18:18.996 "uuid": "780d11cf-36d8-4908-bef7-0f89341e5afb", 00:18:18.996 "is_configured": true, 00:18:18.996 "data_offset": 2048, 00:18:18.996 "data_size": 63488 00:18:18.996 } 00:18:18.996 ] 00:18:18.996 }' 00:18:18.996 14:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.996 14:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.564 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:19.564 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:18:19.564 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:19.564 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:19.564 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:19.564 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:19.564 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:19.564 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:19.564 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.564 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.564 [2024-11-20 14:28:58.381819] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.564 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.564 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:19.564 "name": "Existed_Raid", 00:18:19.564 "aliases": [ 00:18:19.564 "1c2befb0-cd46-422b-88b1-dd52d54ca607" 00:18:19.564 ], 00:18:19.564 "product_name": "Raid Volume", 00:18:19.564 "block_size": 512, 00:18:19.564 "num_blocks": 190464, 00:18:19.564 "uuid": "1c2befb0-cd46-422b-88b1-dd52d54ca607", 00:18:19.564 "assigned_rate_limits": { 00:18:19.564 "rw_ios_per_sec": 0, 00:18:19.564 "rw_mbytes_per_sec": 0, 00:18:19.564 "r_mbytes_per_sec": 0, 00:18:19.564 "w_mbytes_per_sec": 0 00:18:19.564 }, 00:18:19.564 "claimed": false, 00:18:19.564 "zoned": false, 00:18:19.564 "supported_io_types": { 00:18:19.564 "read": true, 00:18:19.564 "write": true, 00:18:19.564 "unmap": false, 00:18:19.564 "flush": false, 
00:18:19.564 "reset": true, 00:18:19.564 "nvme_admin": false, 00:18:19.564 "nvme_io": false, 00:18:19.564 "nvme_io_md": false, 00:18:19.564 "write_zeroes": true, 00:18:19.564 "zcopy": false, 00:18:19.564 "get_zone_info": false, 00:18:19.564 "zone_management": false, 00:18:19.564 "zone_append": false, 00:18:19.564 "compare": false, 00:18:19.564 "compare_and_write": false, 00:18:19.564 "abort": false, 00:18:19.564 "seek_hole": false, 00:18:19.564 "seek_data": false, 00:18:19.564 "copy": false, 00:18:19.564 "nvme_iov_md": false 00:18:19.564 }, 00:18:19.564 "driver_specific": { 00:18:19.564 "raid": { 00:18:19.564 "uuid": "1c2befb0-cd46-422b-88b1-dd52d54ca607", 00:18:19.564 "strip_size_kb": 64, 00:18:19.564 "state": "online", 00:18:19.564 "raid_level": "raid5f", 00:18:19.564 "superblock": true, 00:18:19.564 "num_base_bdevs": 4, 00:18:19.564 "num_base_bdevs_discovered": 4, 00:18:19.564 "num_base_bdevs_operational": 4, 00:18:19.564 "base_bdevs_list": [ 00:18:19.564 { 00:18:19.564 "name": "BaseBdev1", 00:18:19.564 "uuid": "38fb6bae-3f4f-455f-95e0-ba18501a4734", 00:18:19.564 "is_configured": true, 00:18:19.564 "data_offset": 2048, 00:18:19.564 "data_size": 63488 00:18:19.564 }, 00:18:19.564 { 00:18:19.564 "name": "BaseBdev2", 00:18:19.564 "uuid": "c14036ce-1725-4d7f-9e86-80239182c9e0", 00:18:19.564 "is_configured": true, 00:18:19.564 "data_offset": 2048, 00:18:19.564 "data_size": 63488 00:18:19.564 }, 00:18:19.564 { 00:18:19.564 "name": "BaseBdev3", 00:18:19.564 "uuid": "26c8d769-505b-467d-92fc-ba2d65d535fe", 00:18:19.564 "is_configured": true, 00:18:19.564 "data_offset": 2048, 00:18:19.564 "data_size": 63488 00:18:19.564 }, 00:18:19.564 { 00:18:19.564 "name": "BaseBdev4", 00:18:19.564 "uuid": "780d11cf-36d8-4908-bef7-0f89341e5afb", 00:18:19.564 "is_configured": true, 00:18:19.564 "data_offset": 2048, 00:18:19.564 "data_size": 63488 00:18:19.564 } 00:18:19.564 ] 00:18:19.564 } 00:18:19.564 } 00:18:19.564 }' 00:18:19.564 14:28:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:19.564 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:19.564 BaseBdev2 00:18:19.564 BaseBdev3 00:18:19.564 BaseBdev4' 00:18:19.564 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.564 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:19.564 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.823 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:19.823 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.823 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.823 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.823 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.823 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.823 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:19.823 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.823 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:19.823 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.823 14:28:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:19.823 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.823 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.823 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.823 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:19.823 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.823 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:19.824 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.824 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.824 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.824 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.824 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.824 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:19.824 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.824 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:19.824 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.824 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:18:19.824 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.824 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.824 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.824 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:19.824 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:19.824 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.824 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.824 [2024-11-20 14:28:58.757785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.083 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.083 "name": "Existed_Raid", 00:18:20.083 "uuid": "1c2befb0-cd46-422b-88b1-dd52d54ca607", 00:18:20.083 "strip_size_kb": 64, 00:18:20.083 "state": "online", 00:18:20.083 "raid_level": "raid5f", 00:18:20.083 "superblock": true, 00:18:20.083 "num_base_bdevs": 4, 00:18:20.083 "num_base_bdevs_discovered": 3, 00:18:20.083 "num_base_bdevs_operational": 3, 00:18:20.083 "base_bdevs_list": [ 00:18:20.083 { 00:18:20.083 "name": null, 00:18:20.083 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:20.083 "is_configured": false, 00:18:20.083 "data_offset": 0, 00:18:20.083 "data_size": 63488 00:18:20.083 }, 00:18:20.083 { 00:18:20.083 "name": "BaseBdev2", 00:18:20.083 "uuid": "c14036ce-1725-4d7f-9e86-80239182c9e0", 00:18:20.083 "is_configured": true, 00:18:20.083 "data_offset": 2048, 00:18:20.083 "data_size": 63488 00:18:20.084 }, 00:18:20.084 { 00:18:20.084 "name": "BaseBdev3", 00:18:20.084 "uuid": "26c8d769-505b-467d-92fc-ba2d65d535fe", 00:18:20.084 "is_configured": true, 00:18:20.084 "data_offset": 2048, 00:18:20.084 "data_size": 63488 00:18:20.084 }, 00:18:20.084 { 00:18:20.084 "name": "BaseBdev4", 00:18:20.084 "uuid": "780d11cf-36d8-4908-bef7-0f89341e5afb", 00:18:20.084 "is_configured": true, 00:18:20.084 "data_offset": 2048, 00:18:20.084 "data_size": 63488 00:18:20.084 } 00:18:20.084 ] 00:18:20.084 }' 00:18:20.084 14:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.084 14:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.652 [2024-11-20 14:28:59.449116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:20.652 [2024-11-20 14:28:59.449312] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.652 [2024-11-20 14:28:59.532584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:20.652 
14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.652 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.652 [2024-11-20 14:28:59.592627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.912 [2024-11-20 14:28:59.739661] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:20.912 [2024-11-20 14:28:59.739881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.912 14:28:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:21.172 BaseBdev2 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.172 [ 00:18:21.172 { 00:18:21.172 "name": "BaseBdev2", 00:18:21.172 "aliases": [ 00:18:21.172 "c88b301b-0cf3-410b-8ad8-c41857ee4c90" 00:18:21.172 ], 00:18:21.172 "product_name": "Malloc disk", 00:18:21.172 "block_size": 512, 00:18:21.172 "num_blocks": 65536, 00:18:21.172 "uuid": 
"c88b301b-0cf3-410b-8ad8-c41857ee4c90", 00:18:21.172 "assigned_rate_limits": { 00:18:21.172 "rw_ios_per_sec": 0, 00:18:21.172 "rw_mbytes_per_sec": 0, 00:18:21.172 "r_mbytes_per_sec": 0, 00:18:21.172 "w_mbytes_per_sec": 0 00:18:21.172 }, 00:18:21.172 "claimed": false, 00:18:21.172 "zoned": false, 00:18:21.172 "supported_io_types": { 00:18:21.172 "read": true, 00:18:21.172 "write": true, 00:18:21.172 "unmap": true, 00:18:21.172 "flush": true, 00:18:21.172 "reset": true, 00:18:21.172 "nvme_admin": false, 00:18:21.172 "nvme_io": false, 00:18:21.172 "nvme_io_md": false, 00:18:21.172 "write_zeroes": true, 00:18:21.172 "zcopy": true, 00:18:21.172 "get_zone_info": false, 00:18:21.172 "zone_management": false, 00:18:21.172 "zone_append": false, 00:18:21.172 "compare": false, 00:18:21.172 "compare_and_write": false, 00:18:21.172 "abort": true, 00:18:21.172 "seek_hole": false, 00:18:21.172 "seek_data": false, 00:18:21.172 "copy": true, 00:18:21.172 "nvme_iov_md": false 00:18:21.172 }, 00:18:21.172 "memory_domains": [ 00:18:21.172 { 00:18:21.172 "dma_device_id": "system", 00:18:21.172 "dma_device_type": 1 00:18:21.172 }, 00:18:21.172 { 00:18:21.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.172 "dma_device_type": 2 00:18:21.172 } 00:18:21.172 ], 00:18:21.172 "driver_specific": {} 00:18:21.172 } 00:18:21.172 ] 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.172 BaseBdev3 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.172 14:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.172 [ 00:18:21.172 { 00:18:21.172 "name": "BaseBdev3", 00:18:21.172 "aliases": [ 00:18:21.172 "9121344d-0896-4037-97b7-f797ef655054" 00:18:21.172 ], 00:18:21.172 
"product_name": "Malloc disk", 00:18:21.172 "block_size": 512, 00:18:21.172 "num_blocks": 65536, 00:18:21.172 "uuid": "9121344d-0896-4037-97b7-f797ef655054", 00:18:21.172 "assigned_rate_limits": { 00:18:21.172 "rw_ios_per_sec": 0, 00:18:21.172 "rw_mbytes_per_sec": 0, 00:18:21.172 "r_mbytes_per_sec": 0, 00:18:21.172 "w_mbytes_per_sec": 0 00:18:21.172 }, 00:18:21.172 "claimed": false, 00:18:21.172 "zoned": false, 00:18:21.172 "supported_io_types": { 00:18:21.172 "read": true, 00:18:21.172 "write": true, 00:18:21.172 "unmap": true, 00:18:21.172 "flush": true, 00:18:21.172 "reset": true, 00:18:21.172 "nvme_admin": false, 00:18:21.172 "nvme_io": false, 00:18:21.172 "nvme_io_md": false, 00:18:21.172 "write_zeroes": true, 00:18:21.172 "zcopy": true, 00:18:21.172 "get_zone_info": false, 00:18:21.172 "zone_management": false, 00:18:21.172 "zone_append": false, 00:18:21.172 "compare": false, 00:18:21.172 "compare_and_write": false, 00:18:21.172 "abort": true, 00:18:21.172 "seek_hole": false, 00:18:21.172 "seek_data": false, 00:18:21.172 "copy": true, 00:18:21.172 "nvme_iov_md": false 00:18:21.172 }, 00:18:21.172 "memory_domains": [ 00:18:21.172 { 00:18:21.172 "dma_device_id": "system", 00:18:21.172 "dma_device_type": 1 00:18:21.172 }, 00:18:21.172 { 00:18:21.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.172 "dma_device_type": 2 00:18:21.172 } 00:18:21.172 ], 00:18:21.172 "driver_specific": {} 00:18:21.172 } 00:18:21.172 ] 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.172 BaseBdev4 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:21.172 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.173 [ 00:18:21.173 { 00:18:21.173 "name": "BaseBdev4", 00:18:21.173 
"aliases": [ 00:18:21.173 "0fe47918-d7ef-40c0-9b7c-dd3d28a1f776" 00:18:21.173 ], 00:18:21.173 "product_name": "Malloc disk", 00:18:21.173 "block_size": 512, 00:18:21.173 "num_blocks": 65536, 00:18:21.173 "uuid": "0fe47918-d7ef-40c0-9b7c-dd3d28a1f776", 00:18:21.173 "assigned_rate_limits": { 00:18:21.173 "rw_ios_per_sec": 0, 00:18:21.173 "rw_mbytes_per_sec": 0, 00:18:21.173 "r_mbytes_per_sec": 0, 00:18:21.173 "w_mbytes_per_sec": 0 00:18:21.173 }, 00:18:21.173 "claimed": false, 00:18:21.173 "zoned": false, 00:18:21.173 "supported_io_types": { 00:18:21.173 "read": true, 00:18:21.173 "write": true, 00:18:21.173 "unmap": true, 00:18:21.173 "flush": true, 00:18:21.173 "reset": true, 00:18:21.173 "nvme_admin": false, 00:18:21.173 "nvme_io": false, 00:18:21.173 "nvme_io_md": false, 00:18:21.173 "write_zeroes": true, 00:18:21.173 "zcopy": true, 00:18:21.173 "get_zone_info": false, 00:18:21.173 "zone_management": false, 00:18:21.173 "zone_append": false, 00:18:21.173 "compare": false, 00:18:21.173 "compare_and_write": false, 00:18:21.173 "abort": true, 00:18:21.173 "seek_hole": false, 00:18:21.173 "seek_data": false, 00:18:21.173 "copy": true, 00:18:21.173 "nvme_iov_md": false 00:18:21.173 }, 00:18:21.173 "memory_domains": [ 00:18:21.173 { 00:18:21.173 "dma_device_id": "system", 00:18:21.173 "dma_device_type": 1 00:18:21.173 }, 00:18:21.173 { 00:18:21.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.173 "dma_device_type": 2 00:18:21.173 } 00:18:21.173 ], 00:18:21.173 "driver_specific": {} 00:18:21.173 } 00:18:21.173 ] 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:21.173 
14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.173 [2024-11-20 14:29:00.106624] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:21.173 [2024-11-20 14:29:00.106698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:21.173 [2024-11-20 14:29:00.106734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:21.173 [2024-11-20 14:29:00.109267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:21.173 [2024-11-20 14:29:00.109368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.173 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.463 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.463 "name": "Existed_Raid", 00:18:21.463 "uuid": "e5073415-c826-41c0-bdad-b19bc585db05", 00:18:21.463 "strip_size_kb": 64, 00:18:21.463 "state": "configuring", 00:18:21.463 "raid_level": "raid5f", 00:18:21.463 "superblock": true, 00:18:21.463 "num_base_bdevs": 4, 00:18:21.463 "num_base_bdevs_discovered": 3, 00:18:21.463 "num_base_bdevs_operational": 4, 00:18:21.463 "base_bdevs_list": [ 00:18:21.463 { 00:18:21.463 "name": "BaseBdev1", 00:18:21.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.463 "is_configured": false, 00:18:21.463 "data_offset": 0, 00:18:21.463 "data_size": 0 00:18:21.463 }, 00:18:21.463 { 00:18:21.463 "name": "BaseBdev2", 00:18:21.463 "uuid": "c88b301b-0cf3-410b-8ad8-c41857ee4c90", 00:18:21.463 "is_configured": true, 00:18:21.463 "data_offset": 2048, 00:18:21.463 "data_size": 63488 00:18:21.463 }, 00:18:21.463 { 00:18:21.463 "name": "BaseBdev3", 
00:18:21.463 "uuid": "9121344d-0896-4037-97b7-f797ef655054", 00:18:21.463 "is_configured": true, 00:18:21.463 "data_offset": 2048, 00:18:21.463 "data_size": 63488 00:18:21.463 }, 00:18:21.463 { 00:18:21.463 "name": "BaseBdev4", 00:18:21.463 "uuid": "0fe47918-d7ef-40c0-9b7c-dd3d28a1f776", 00:18:21.463 "is_configured": true, 00:18:21.463 "data_offset": 2048, 00:18:21.463 "data_size": 63488 00:18:21.463 } 00:18:21.463 ] 00:18:21.463 }' 00:18:21.463 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.463 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.734 [2024-11-20 14:29:00.634746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:21.734 
14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.734 "name": "Existed_Raid", 00:18:21.734 "uuid": "e5073415-c826-41c0-bdad-b19bc585db05", 00:18:21.734 "strip_size_kb": 64, 00:18:21.734 "state": "configuring", 00:18:21.734 "raid_level": "raid5f", 00:18:21.734 "superblock": true, 00:18:21.734 "num_base_bdevs": 4, 00:18:21.734 "num_base_bdevs_discovered": 2, 00:18:21.734 "num_base_bdevs_operational": 4, 00:18:21.734 "base_bdevs_list": [ 00:18:21.734 { 00:18:21.734 "name": "BaseBdev1", 00:18:21.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.734 "is_configured": false, 00:18:21.734 "data_offset": 0, 00:18:21.734 "data_size": 0 00:18:21.734 }, 00:18:21.734 { 00:18:21.734 "name": null, 00:18:21.734 "uuid": "c88b301b-0cf3-410b-8ad8-c41857ee4c90", 00:18:21.734 "is_configured": false, 00:18:21.734 "data_offset": 0, 00:18:21.734 "data_size": 63488 00:18:21.734 }, 00:18:21.734 { 
00:18:21.734 "name": "BaseBdev3", 00:18:21.734 "uuid": "9121344d-0896-4037-97b7-f797ef655054", 00:18:21.734 "is_configured": true, 00:18:21.734 "data_offset": 2048, 00:18:21.734 "data_size": 63488 00:18:21.734 }, 00:18:21.734 { 00:18:21.734 "name": "BaseBdev4", 00:18:21.734 "uuid": "0fe47918-d7ef-40c0-9b7c-dd3d28a1f776", 00:18:21.734 "is_configured": true, 00:18:21.734 "data_offset": 2048, 00:18:21.734 "data_size": 63488 00:18:21.734 } 00:18:21.734 ] 00:18:21.734 }' 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.734 14:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.301 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.301 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.301 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.301 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:22.301 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.301 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:22.301 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:22.301 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.301 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.559 [2024-11-20 14:29:01.288404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:22.559 BaseBdev1 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.559 [ 00:18:22.559 { 00:18:22.559 "name": "BaseBdev1", 00:18:22.559 "aliases": [ 00:18:22.559 "8ec59d52-6e70-4662-87ba-19171745a180" 00:18:22.559 ], 00:18:22.559 "product_name": "Malloc disk", 00:18:22.559 "block_size": 512, 00:18:22.559 "num_blocks": 65536, 00:18:22.559 "uuid": "8ec59d52-6e70-4662-87ba-19171745a180", 00:18:22.559 "assigned_rate_limits": { 00:18:22.559 "rw_ios_per_sec": 0, 00:18:22.559 "rw_mbytes_per_sec": 0, 00:18:22.559 
"r_mbytes_per_sec": 0, 00:18:22.559 "w_mbytes_per_sec": 0 00:18:22.559 }, 00:18:22.559 "claimed": true, 00:18:22.559 "claim_type": "exclusive_write", 00:18:22.559 "zoned": false, 00:18:22.559 "supported_io_types": { 00:18:22.559 "read": true, 00:18:22.559 "write": true, 00:18:22.559 "unmap": true, 00:18:22.559 "flush": true, 00:18:22.559 "reset": true, 00:18:22.559 "nvme_admin": false, 00:18:22.559 "nvme_io": false, 00:18:22.559 "nvme_io_md": false, 00:18:22.559 "write_zeroes": true, 00:18:22.559 "zcopy": true, 00:18:22.559 "get_zone_info": false, 00:18:22.559 "zone_management": false, 00:18:22.559 "zone_append": false, 00:18:22.559 "compare": false, 00:18:22.559 "compare_and_write": false, 00:18:22.559 "abort": true, 00:18:22.559 "seek_hole": false, 00:18:22.559 "seek_data": false, 00:18:22.559 "copy": true, 00:18:22.559 "nvme_iov_md": false 00:18:22.559 }, 00:18:22.559 "memory_domains": [ 00:18:22.559 { 00:18:22.559 "dma_device_id": "system", 00:18:22.559 "dma_device_type": 1 00:18:22.559 }, 00:18:22.559 { 00:18:22.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.559 "dma_device_type": 2 00:18:22.559 } 00:18:22.559 ], 00:18:22.559 "driver_specific": {} 00:18:22.559 } 00:18:22.559 ] 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.559 14:29:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.559 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.559 "name": "Existed_Raid", 00:18:22.559 "uuid": "e5073415-c826-41c0-bdad-b19bc585db05", 00:18:22.559 "strip_size_kb": 64, 00:18:22.559 "state": "configuring", 00:18:22.559 "raid_level": "raid5f", 00:18:22.559 "superblock": true, 00:18:22.559 "num_base_bdevs": 4, 00:18:22.559 "num_base_bdevs_discovered": 3, 00:18:22.559 "num_base_bdevs_operational": 4, 00:18:22.559 "base_bdevs_list": [ 00:18:22.559 { 00:18:22.559 "name": "BaseBdev1", 00:18:22.559 "uuid": "8ec59d52-6e70-4662-87ba-19171745a180", 00:18:22.559 "is_configured": true, 00:18:22.559 "data_offset": 2048, 00:18:22.559 "data_size": 63488 00:18:22.559 
}, 00:18:22.559 { 00:18:22.559 "name": null, 00:18:22.559 "uuid": "c88b301b-0cf3-410b-8ad8-c41857ee4c90", 00:18:22.559 "is_configured": false, 00:18:22.559 "data_offset": 0, 00:18:22.559 "data_size": 63488 00:18:22.559 }, 00:18:22.559 { 00:18:22.559 "name": "BaseBdev3", 00:18:22.559 "uuid": "9121344d-0896-4037-97b7-f797ef655054", 00:18:22.559 "is_configured": true, 00:18:22.559 "data_offset": 2048, 00:18:22.559 "data_size": 63488 00:18:22.559 }, 00:18:22.559 { 00:18:22.559 "name": "BaseBdev4", 00:18:22.559 "uuid": "0fe47918-d7ef-40c0-9b7c-dd3d28a1f776", 00:18:22.559 "is_configured": true, 00:18:22.560 "data_offset": 2048, 00:18:22.560 "data_size": 63488 00:18:22.560 } 00:18:22.560 ] 00:18:22.560 }' 00:18:22.560 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.560 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.126 
[2024-11-20 14:29:01.924779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:23.126 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.126 "name": "Existed_Raid", 00:18:23.126 "uuid": "e5073415-c826-41c0-bdad-b19bc585db05", 00:18:23.126 "strip_size_kb": 64, 00:18:23.126 "state": "configuring", 00:18:23.126 "raid_level": "raid5f", 00:18:23.126 "superblock": true, 00:18:23.126 "num_base_bdevs": 4, 00:18:23.126 "num_base_bdevs_discovered": 2, 00:18:23.126 "num_base_bdevs_operational": 4, 00:18:23.126 "base_bdevs_list": [ 00:18:23.126 { 00:18:23.126 "name": "BaseBdev1", 00:18:23.126 "uuid": "8ec59d52-6e70-4662-87ba-19171745a180", 00:18:23.126 "is_configured": true, 00:18:23.126 "data_offset": 2048, 00:18:23.126 "data_size": 63488 00:18:23.126 }, 00:18:23.126 { 00:18:23.126 "name": null, 00:18:23.126 "uuid": "c88b301b-0cf3-410b-8ad8-c41857ee4c90", 00:18:23.126 "is_configured": false, 00:18:23.126 "data_offset": 0, 00:18:23.126 "data_size": 63488 00:18:23.126 }, 00:18:23.126 { 00:18:23.126 "name": null, 00:18:23.126 "uuid": "9121344d-0896-4037-97b7-f797ef655054", 00:18:23.126 "is_configured": false, 00:18:23.126 "data_offset": 0, 00:18:23.126 "data_size": 63488 00:18:23.126 }, 00:18:23.127 { 00:18:23.127 "name": "BaseBdev4", 00:18:23.127 "uuid": "0fe47918-d7ef-40c0-9b7c-dd3d28a1f776", 00:18:23.127 "is_configured": true, 00:18:23.127 "data_offset": 2048, 00:18:23.127 "data_size": 63488 00:18:23.127 } 00:18:23.127 ] 00:18:23.127 }' 00:18:23.127 14:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.127 14:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.696 [2024-11-20 14:29:02.488897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.696 14:29:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.696 "name": "Existed_Raid", 00:18:23.696 "uuid": "e5073415-c826-41c0-bdad-b19bc585db05", 00:18:23.696 "strip_size_kb": 64, 00:18:23.696 "state": "configuring", 00:18:23.696 "raid_level": "raid5f", 00:18:23.696 "superblock": true, 00:18:23.696 "num_base_bdevs": 4, 00:18:23.696 "num_base_bdevs_discovered": 3, 00:18:23.696 "num_base_bdevs_operational": 4, 00:18:23.696 "base_bdevs_list": [ 00:18:23.696 { 00:18:23.696 "name": "BaseBdev1", 00:18:23.696 "uuid": "8ec59d52-6e70-4662-87ba-19171745a180", 00:18:23.696 "is_configured": true, 00:18:23.696 "data_offset": 2048, 00:18:23.696 "data_size": 63488 00:18:23.696 }, 00:18:23.696 { 00:18:23.696 "name": null, 00:18:23.696 "uuid": "c88b301b-0cf3-410b-8ad8-c41857ee4c90", 00:18:23.696 "is_configured": false, 00:18:23.696 "data_offset": 0, 00:18:23.696 "data_size": 63488 00:18:23.696 }, 00:18:23.696 { 00:18:23.696 "name": "BaseBdev3", 00:18:23.696 "uuid": "9121344d-0896-4037-97b7-f797ef655054", 00:18:23.696 "is_configured": true, 00:18:23.696 "data_offset": 2048, 00:18:23.696 "data_size": 63488 00:18:23.696 }, 00:18:23.696 { 
00:18:23.696 "name": "BaseBdev4", 00:18:23.696 "uuid": "0fe47918-d7ef-40c0-9b7c-dd3d28a1f776", 00:18:23.696 "is_configured": true, 00:18:23.696 "data_offset": 2048, 00:18:23.696 "data_size": 63488 00:18:23.696 } 00:18:23.696 ] 00:18:23.696 }' 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.696 14:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.263 [2024-11-20 14:29:03.065211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.263 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.264 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.264 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.264 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.264 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.264 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.264 "name": "Existed_Raid", 00:18:24.264 "uuid": "e5073415-c826-41c0-bdad-b19bc585db05", 00:18:24.264 "strip_size_kb": 64, 00:18:24.264 "state": "configuring", 00:18:24.264 "raid_level": "raid5f", 00:18:24.264 "superblock": true, 00:18:24.264 "num_base_bdevs": 4, 00:18:24.264 "num_base_bdevs_discovered": 2, 00:18:24.264 
"num_base_bdevs_operational": 4, 00:18:24.264 "base_bdevs_list": [ 00:18:24.264 { 00:18:24.264 "name": null, 00:18:24.264 "uuid": "8ec59d52-6e70-4662-87ba-19171745a180", 00:18:24.264 "is_configured": false, 00:18:24.264 "data_offset": 0, 00:18:24.264 "data_size": 63488 00:18:24.264 }, 00:18:24.264 { 00:18:24.264 "name": null, 00:18:24.264 "uuid": "c88b301b-0cf3-410b-8ad8-c41857ee4c90", 00:18:24.264 "is_configured": false, 00:18:24.264 "data_offset": 0, 00:18:24.264 "data_size": 63488 00:18:24.264 }, 00:18:24.264 { 00:18:24.264 "name": "BaseBdev3", 00:18:24.264 "uuid": "9121344d-0896-4037-97b7-f797ef655054", 00:18:24.264 "is_configured": true, 00:18:24.264 "data_offset": 2048, 00:18:24.264 "data_size": 63488 00:18:24.264 }, 00:18:24.264 { 00:18:24.264 "name": "BaseBdev4", 00:18:24.264 "uuid": "0fe47918-d7ef-40c0-9b7c-dd3d28a1f776", 00:18:24.264 "is_configured": true, 00:18:24.264 "data_offset": 2048, 00:18:24.264 "data_size": 63488 00:18:24.264 } 00:18:24.264 ] 00:18:24.264 }' 00:18:24.264 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.264 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.831 [2024-11-20 14:29:03.726066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.831 "name": "Existed_Raid", 00:18:24.831 "uuid": "e5073415-c826-41c0-bdad-b19bc585db05", 00:18:24.831 "strip_size_kb": 64, 00:18:24.831 "state": "configuring", 00:18:24.831 "raid_level": "raid5f", 00:18:24.831 "superblock": true, 00:18:24.831 "num_base_bdevs": 4, 00:18:24.831 "num_base_bdevs_discovered": 3, 00:18:24.831 "num_base_bdevs_operational": 4, 00:18:24.831 "base_bdevs_list": [ 00:18:24.831 { 00:18:24.831 "name": null, 00:18:24.831 "uuid": "8ec59d52-6e70-4662-87ba-19171745a180", 00:18:24.831 "is_configured": false, 00:18:24.831 "data_offset": 0, 00:18:24.831 "data_size": 63488 00:18:24.831 }, 00:18:24.831 { 00:18:24.831 "name": "BaseBdev2", 00:18:24.831 "uuid": "c88b301b-0cf3-410b-8ad8-c41857ee4c90", 00:18:24.831 "is_configured": true, 00:18:24.831 "data_offset": 2048, 00:18:24.831 "data_size": 63488 00:18:24.831 }, 00:18:24.831 { 00:18:24.831 "name": "BaseBdev3", 00:18:24.831 "uuid": "9121344d-0896-4037-97b7-f797ef655054", 00:18:24.831 "is_configured": true, 00:18:24.831 "data_offset": 2048, 00:18:24.831 "data_size": 63488 00:18:24.831 }, 00:18:24.831 { 00:18:24.831 "name": "BaseBdev4", 00:18:24.831 "uuid": "0fe47918-d7ef-40c0-9b7c-dd3d28a1f776", 00:18:24.831 "is_configured": true, 00:18:24.831 "data_offset": 2048, 00:18:24.831 "data_size": 63488 00:18:24.831 } 00:18:24.831 ] 00:18:24.831 }' 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.831 14:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:25.398 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.398 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.398 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.398 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8ec59d52-6e70-4662-87ba-19171745a180 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.678 [2024-11-20 14:29:04.512441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:25.678 [2024-11-20 14:29:04.512833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:25.678 [2024-11-20 
14:29:04.512864] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:25.678 NewBaseBdev 00:18:25.678 [2024-11-20 14:29:04.513288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.678 [2024-11-20 14:29:04.521495] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:25.678 [2024-11-20 14:29:04.521540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:25.678 [2024-11-20 14:29:04.521960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.678 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.678 [ 00:18:25.678 { 00:18:25.678 "name": "NewBaseBdev", 00:18:25.678 "aliases": [ 00:18:25.678 "8ec59d52-6e70-4662-87ba-19171745a180" 00:18:25.678 ], 00:18:25.678 "product_name": "Malloc disk", 00:18:25.678 "block_size": 512, 00:18:25.678 "num_blocks": 65536, 00:18:25.678 "uuid": "8ec59d52-6e70-4662-87ba-19171745a180", 00:18:25.678 "assigned_rate_limits": { 00:18:25.678 "rw_ios_per_sec": 0, 00:18:25.678 "rw_mbytes_per_sec": 0, 00:18:25.678 "r_mbytes_per_sec": 0, 00:18:25.678 "w_mbytes_per_sec": 0 00:18:25.678 }, 00:18:25.678 "claimed": true, 00:18:25.678 "claim_type": "exclusive_write", 00:18:25.678 "zoned": false, 00:18:25.678 "supported_io_types": { 00:18:25.678 "read": true, 00:18:25.678 "write": true, 00:18:25.678 "unmap": true, 00:18:25.678 "flush": true, 00:18:25.678 "reset": true, 00:18:25.678 "nvme_admin": false, 00:18:25.678 "nvme_io": false, 00:18:25.678 "nvme_io_md": false, 00:18:25.678 "write_zeroes": true, 00:18:25.678 "zcopy": true, 00:18:25.678 "get_zone_info": false, 00:18:25.678 "zone_management": false, 00:18:25.678 "zone_append": false, 00:18:25.678 "compare": false, 00:18:25.678 "compare_and_write": false, 00:18:25.678 "abort": true, 00:18:25.678 "seek_hole": false, 00:18:25.678 "seek_data": false, 00:18:25.678 "copy": true, 00:18:25.678 "nvme_iov_md": false 00:18:25.678 }, 00:18:25.678 "memory_domains": [ 00:18:25.678 { 00:18:25.678 "dma_device_id": "system", 00:18:25.678 "dma_device_type": 1 00:18:25.678 }, 00:18:25.678 { 00:18:25.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.679 "dma_device_type": 2 00:18:25.679 } 00:18:25.679 ], 00:18:25.679 "driver_specific": {} 00:18:25.679 } 00:18:25.679 ] 00:18:25.679 14:29:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.679 "name": "Existed_Raid", 00:18:25.679 "uuid": "e5073415-c826-41c0-bdad-b19bc585db05", 00:18:25.679 "strip_size_kb": 64, 00:18:25.679 "state": "online", 00:18:25.679 "raid_level": "raid5f", 00:18:25.679 "superblock": true, 00:18:25.679 "num_base_bdevs": 4, 00:18:25.679 "num_base_bdevs_discovered": 4, 00:18:25.679 "num_base_bdevs_operational": 4, 00:18:25.679 "base_bdevs_list": [ 00:18:25.679 { 00:18:25.679 "name": "NewBaseBdev", 00:18:25.679 "uuid": "8ec59d52-6e70-4662-87ba-19171745a180", 00:18:25.679 "is_configured": true, 00:18:25.679 "data_offset": 2048, 00:18:25.679 "data_size": 63488 00:18:25.679 }, 00:18:25.679 { 00:18:25.679 "name": "BaseBdev2", 00:18:25.679 "uuid": "c88b301b-0cf3-410b-8ad8-c41857ee4c90", 00:18:25.679 "is_configured": true, 00:18:25.679 "data_offset": 2048, 00:18:25.679 "data_size": 63488 00:18:25.679 }, 00:18:25.679 { 00:18:25.679 "name": "BaseBdev3", 00:18:25.679 "uuid": "9121344d-0896-4037-97b7-f797ef655054", 00:18:25.679 "is_configured": true, 00:18:25.679 "data_offset": 2048, 00:18:25.679 "data_size": 63488 00:18:25.679 }, 00:18:25.679 { 00:18:25.679 "name": "BaseBdev4", 00:18:25.679 "uuid": "0fe47918-d7ef-40c0-9b7c-dd3d28a1f776", 00:18:25.679 "is_configured": true, 00:18:25.679 "data_offset": 2048, 00:18:25.679 "data_size": 63488 00:18:25.679 } 00:18:25.679 ] 00:18:25.679 }' 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.679 14:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.270 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:26.270 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:26.270 14:29:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:26.270 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:26.270 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:26.270 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:26.270 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:26.270 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:26.270 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.270 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.270 [2024-11-20 14:29:05.087830] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.270 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.270 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:26.270 "name": "Existed_Raid", 00:18:26.270 "aliases": [ 00:18:26.270 "e5073415-c826-41c0-bdad-b19bc585db05" 00:18:26.270 ], 00:18:26.270 "product_name": "Raid Volume", 00:18:26.270 "block_size": 512, 00:18:26.270 "num_blocks": 190464, 00:18:26.270 "uuid": "e5073415-c826-41c0-bdad-b19bc585db05", 00:18:26.270 "assigned_rate_limits": { 00:18:26.270 "rw_ios_per_sec": 0, 00:18:26.270 "rw_mbytes_per_sec": 0, 00:18:26.270 "r_mbytes_per_sec": 0, 00:18:26.270 "w_mbytes_per_sec": 0 00:18:26.270 }, 00:18:26.270 "claimed": false, 00:18:26.270 "zoned": false, 00:18:26.270 "supported_io_types": { 00:18:26.270 "read": true, 00:18:26.270 "write": true, 00:18:26.270 "unmap": false, 00:18:26.270 "flush": false, 00:18:26.270 "reset": true, 00:18:26.270 "nvme_admin": false, 00:18:26.270 "nvme_io": false, 
00:18:26.270 "nvme_io_md": false, 00:18:26.270 "write_zeroes": true, 00:18:26.270 "zcopy": false, 00:18:26.270 "get_zone_info": false, 00:18:26.270 "zone_management": false, 00:18:26.270 "zone_append": false, 00:18:26.270 "compare": false, 00:18:26.270 "compare_and_write": false, 00:18:26.270 "abort": false, 00:18:26.270 "seek_hole": false, 00:18:26.270 "seek_data": false, 00:18:26.270 "copy": false, 00:18:26.270 "nvme_iov_md": false 00:18:26.270 }, 00:18:26.270 "driver_specific": { 00:18:26.270 "raid": { 00:18:26.270 "uuid": "e5073415-c826-41c0-bdad-b19bc585db05", 00:18:26.270 "strip_size_kb": 64, 00:18:26.270 "state": "online", 00:18:26.270 "raid_level": "raid5f", 00:18:26.270 "superblock": true, 00:18:26.270 "num_base_bdevs": 4, 00:18:26.270 "num_base_bdevs_discovered": 4, 00:18:26.270 "num_base_bdevs_operational": 4, 00:18:26.270 "base_bdevs_list": [ 00:18:26.270 { 00:18:26.270 "name": "NewBaseBdev", 00:18:26.270 "uuid": "8ec59d52-6e70-4662-87ba-19171745a180", 00:18:26.270 "is_configured": true, 00:18:26.270 "data_offset": 2048, 00:18:26.270 "data_size": 63488 00:18:26.270 }, 00:18:26.270 { 00:18:26.270 "name": "BaseBdev2", 00:18:26.270 "uuid": "c88b301b-0cf3-410b-8ad8-c41857ee4c90", 00:18:26.270 "is_configured": true, 00:18:26.270 "data_offset": 2048, 00:18:26.270 "data_size": 63488 00:18:26.270 }, 00:18:26.270 { 00:18:26.270 "name": "BaseBdev3", 00:18:26.270 "uuid": "9121344d-0896-4037-97b7-f797ef655054", 00:18:26.270 "is_configured": true, 00:18:26.270 "data_offset": 2048, 00:18:26.270 "data_size": 63488 00:18:26.270 }, 00:18:26.270 { 00:18:26.270 "name": "BaseBdev4", 00:18:26.271 "uuid": "0fe47918-d7ef-40c0-9b7c-dd3d28a1f776", 00:18:26.271 "is_configured": true, 00:18:26.271 "data_offset": 2048, 00:18:26.271 "data_size": 63488 00:18:26.271 } 00:18:26.271 ] 00:18:26.271 } 00:18:26.271 } 00:18:26.271 }' 00:18:26.271 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:18:26.271 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:26.271 BaseBdev2 00:18:26.271 BaseBdev3 00:18:26.271 BaseBdev4' 00:18:26.271 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.271 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:26.271 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.271 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:26.271 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.271 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.271 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.530 14:29:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.530 14:29:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.530 [2024-11-20 14:29:05.463590] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:26.530 [2024-11-20 14:29:05.463629] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:26.530 [2024-11-20 14:29:05.463740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.530 [2024-11-20 14:29:05.464152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.530 [2024-11-20 14:29:05.464172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83871 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83871 ']' 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83871 00:18:26.530 14:29:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83871 00:18:26.530 killing process with pid 83871 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83871' 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83871 00:18:26.530 [2024-11-20 14:29:05.502531] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:26.530 14:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83871 00:18:27.097 [2024-11-20 14:29:05.904666] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:28.033 14:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:28.033 00:18:28.033 real 0m13.179s 00:18:28.033 user 0m21.834s 00:18:28.033 sys 0m1.824s 00:18:28.033 14:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:28.033 14:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.033 ************************************ 00:18:28.033 END TEST raid5f_state_function_test_sb 00:18:28.033 ************************************ 00:18:28.293 14:29:07 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:18:28.293 14:29:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:28.293 
14:29:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:28.293 14:29:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.293 ************************************ 00:18:28.293 START TEST raid5f_superblock_test 00:18:28.293 ************************************ 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84553 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84553 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84553 ']' 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.293 14:29:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.293 [2024-11-20 14:29:07.161879] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:18:28.293 [2024-11-20 14:29:07.162108] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84553 ] 00:18:28.553 [2024-11-20 14:29:07.351110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.553 [2024-11-20 14:29:07.508125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.812 [2024-11-20 14:29:07.729640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.812 [2024-11-20 14:29:07.729760] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.380 malloc1 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.380 [2024-11-20 14:29:08.247037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:29.380 [2024-11-20 14:29:08.247344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.380 [2024-11-20 14:29:08.247428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:29.380 [2024-11-20 14:29:08.247555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.380 [2024-11-20 14:29:08.250509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.380 [2024-11-20 14:29:08.250712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:29.380 pt1 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.380 malloc2 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.380 [2024-11-20 14:29:08.303583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:29.380 [2024-11-20 14:29:08.303794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.380 [2024-11-20 14:29:08.303944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:29.380 [2024-11-20 14:29:08.304085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.380 [2024-11-20 14:29:08.306906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.380 [2024-11-20 14:29:08.307089] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:29.380 pt2 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.380 malloc3 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.380 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.640 [2024-11-20 14:29:08.362800] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:29.640 [2024-11-20 14:29:08.363048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.640 [2024-11-20 14:29:08.363130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:29.640 [2024-11-20 14:29:08.363254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.640 [2024-11-20 14:29:08.366200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.640 [2024-11-20 14:29:08.366361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:29.640 pt3 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.640 14:29:08 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.640 malloc4 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.640 [2024-11-20 14:29:08.416127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:29.640 [2024-11-20 14:29:08.416371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.640 [2024-11-20 14:29:08.416449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:29.640 [2024-11-20 14:29:08.416559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.640 [2024-11-20 14:29:08.419411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.640 [2024-11-20 14:29:08.419599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:29.640 pt4 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:29.640 [2024-11-20 14:29:08.424367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:29.640 [2024-11-20 14:29:08.426869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:29.640 [2024-11-20 14:29:08.427179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:29.640 [2024-11-20 14:29:08.427428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:29.640 [2024-11-20 14:29:08.427725] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:29.640 [2024-11-20 14:29:08.427751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:29.640 [2024-11-20 14:29:08.428096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:29.640 [2024-11-20 14:29:08.434994] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:29.640 [2024-11-20 14:29:08.435202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:29.640 [2024-11-20 14:29:08.435643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.640 
14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.640 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.641 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.641 "name": "raid_bdev1", 00:18:29.641 "uuid": "6953f3bc-2176-4e64-91f8-9def348be37d", 00:18:29.641 "strip_size_kb": 64, 00:18:29.641 "state": "online", 00:18:29.641 "raid_level": "raid5f", 00:18:29.641 "superblock": true, 00:18:29.641 "num_base_bdevs": 4, 00:18:29.641 "num_base_bdevs_discovered": 4, 00:18:29.641 "num_base_bdevs_operational": 4, 00:18:29.641 "base_bdevs_list": [ 00:18:29.641 { 00:18:29.641 "name": "pt1", 00:18:29.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:29.641 "is_configured": true, 00:18:29.641 "data_offset": 2048, 00:18:29.641 "data_size": 63488 00:18:29.641 }, 00:18:29.641 { 00:18:29.641 "name": "pt2", 00:18:29.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.641 "is_configured": true, 00:18:29.641 "data_offset": 2048, 00:18:29.641 
"data_size": 63488 00:18:29.641 }, 00:18:29.641 { 00:18:29.641 "name": "pt3", 00:18:29.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:29.641 "is_configured": true, 00:18:29.641 "data_offset": 2048, 00:18:29.641 "data_size": 63488 00:18:29.641 }, 00:18:29.641 { 00:18:29.641 "name": "pt4", 00:18:29.641 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:29.641 "is_configured": true, 00:18:29.641 "data_offset": 2048, 00:18:29.641 "data_size": 63488 00:18:29.641 } 00:18:29.641 ] 00:18:29.641 }' 00:18:29.641 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.641 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.209 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:30.209 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:30.209 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:30.209 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:30.209 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:30.209 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:30.209 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:30.209 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:30.209 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.209 14:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.209 [2024-11-20 14:29:08.947605] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:30.209 14:29:08 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.209 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:30.209 "name": "raid_bdev1", 00:18:30.209 "aliases": [ 00:18:30.209 "6953f3bc-2176-4e64-91f8-9def348be37d" 00:18:30.209 ], 00:18:30.209 "product_name": "Raid Volume", 00:18:30.209 "block_size": 512, 00:18:30.209 "num_blocks": 190464, 00:18:30.209 "uuid": "6953f3bc-2176-4e64-91f8-9def348be37d", 00:18:30.209 "assigned_rate_limits": { 00:18:30.209 "rw_ios_per_sec": 0, 00:18:30.209 "rw_mbytes_per_sec": 0, 00:18:30.209 "r_mbytes_per_sec": 0, 00:18:30.209 "w_mbytes_per_sec": 0 00:18:30.209 }, 00:18:30.209 "claimed": false, 00:18:30.209 "zoned": false, 00:18:30.209 "supported_io_types": { 00:18:30.209 "read": true, 00:18:30.209 "write": true, 00:18:30.209 "unmap": false, 00:18:30.209 "flush": false, 00:18:30.209 "reset": true, 00:18:30.209 "nvme_admin": false, 00:18:30.209 "nvme_io": false, 00:18:30.209 "nvme_io_md": false, 00:18:30.209 "write_zeroes": true, 00:18:30.209 "zcopy": false, 00:18:30.209 "get_zone_info": false, 00:18:30.209 "zone_management": false, 00:18:30.209 "zone_append": false, 00:18:30.209 "compare": false, 00:18:30.209 "compare_and_write": false, 00:18:30.209 "abort": false, 00:18:30.209 "seek_hole": false, 00:18:30.209 "seek_data": false, 00:18:30.209 "copy": false, 00:18:30.209 "nvme_iov_md": false 00:18:30.209 }, 00:18:30.209 "driver_specific": { 00:18:30.209 "raid": { 00:18:30.209 "uuid": "6953f3bc-2176-4e64-91f8-9def348be37d", 00:18:30.209 "strip_size_kb": 64, 00:18:30.209 "state": "online", 00:18:30.209 "raid_level": "raid5f", 00:18:30.209 "superblock": true, 00:18:30.209 "num_base_bdevs": 4, 00:18:30.209 "num_base_bdevs_discovered": 4, 00:18:30.209 "num_base_bdevs_operational": 4, 00:18:30.209 "base_bdevs_list": [ 00:18:30.209 { 00:18:30.209 "name": "pt1", 00:18:30.209 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:30.209 "is_configured": true, 00:18:30.209 "data_offset": 2048, 
00:18:30.209 "data_size": 63488 00:18:30.209 }, 00:18:30.209 { 00:18:30.209 "name": "pt2", 00:18:30.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.209 "is_configured": true, 00:18:30.209 "data_offset": 2048, 00:18:30.209 "data_size": 63488 00:18:30.209 }, 00:18:30.209 { 00:18:30.209 "name": "pt3", 00:18:30.209 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:30.209 "is_configured": true, 00:18:30.209 "data_offset": 2048, 00:18:30.209 "data_size": 63488 00:18:30.209 }, 00:18:30.209 { 00:18:30.209 "name": "pt4", 00:18:30.209 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:30.209 "is_configured": true, 00:18:30.209 "data_offset": 2048, 00:18:30.209 "data_size": 63488 00:18:30.209 } 00:18:30.209 ] 00:18:30.209 } 00:18:30.209 } 00:18:30.209 }' 00:18:30.209 14:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:30.209 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:30.209 pt2 00:18:30.209 pt3 00:18:30.209 pt4' 00:18:30.209 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.209 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:30.209 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:30.209 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:30.209 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.209 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.209 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.209 14:29:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.209 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:30.209 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:30.209 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:30.209 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:30.209 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.209 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.209 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.209 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:30.468 [2024-11-20 14:29:09.307701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6953f3bc-2176-4e64-91f8-9def348be37d 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
6953f3bc-2176-4e64-91f8-9def348be37d ']' 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.468 [2024-11-20 14:29:09.351484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.468 [2024-11-20 14:29:09.351645] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.468 [2024-11-20 14:29:09.351893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.468 [2024-11-20 14:29:09.352070] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.468 [2024-11-20 14:29:09.352103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:30.468 
14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.468 14:29:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.468 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.728 [2024-11-20 14:29:09.499554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:30.728 [2024-11-20 14:29:09.502311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:30.728 [2024-11-20 14:29:09.502410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:30.728 [2024-11-20 14:29:09.502462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:30.728 [2024-11-20 14:29:09.502531] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:30.728 [2024-11-20 14:29:09.502660] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:30.728 [2024-11-20 14:29:09.502695] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:30.728 [2024-11-20 14:29:09.502728] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:30.728 [2024-11-20 14:29:09.502751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.728 [2024-11-20 14:29:09.502767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:30.728 request: 00:18:30.728 { 00:18:30.728 "name": "raid_bdev1", 00:18:30.728 "raid_level": "raid5f", 00:18:30.728 "base_bdevs": [ 00:18:30.728 "malloc1", 00:18:30.728 "malloc2", 00:18:30.728 "malloc3", 00:18:30.728 "malloc4" 00:18:30.728 ], 00:18:30.728 "strip_size_kb": 64, 00:18:30.728 "superblock": false, 00:18:30.728 "method": "bdev_raid_create", 00:18:30.728 "req_id": 1 00:18:30.728 } 00:18:30.728 Got JSON-RPC error response 
00:18:30.728 response: 00:18:30.728 { 00:18:30.728 "code": -17, 00:18:30.728 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:30.728 } 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.728 [2024-11-20 14:29:09.563535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:30.728 [2024-11-20 14:29:09.563622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:18:30.728 [2024-11-20 14:29:09.563649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:30.728 [2024-11-20 14:29:09.563666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.728 [2024-11-20 14:29:09.566628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.728 [2024-11-20 14:29:09.566695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:30.728 [2024-11-20 14:29:09.566818] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:30.728 [2024-11-20 14:29:09.566894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:30.728 pt1 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.728 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.728 "name": "raid_bdev1", 00:18:30.728 "uuid": "6953f3bc-2176-4e64-91f8-9def348be37d", 00:18:30.728 "strip_size_kb": 64, 00:18:30.728 "state": "configuring", 00:18:30.728 "raid_level": "raid5f", 00:18:30.728 "superblock": true, 00:18:30.728 "num_base_bdevs": 4, 00:18:30.728 "num_base_bdevs_discovered": 1, 00:18:30.728 "num_base_bdevs_operational": 4, 00:18:30.728 "base_bdevs_list": [ 00:18:30.728 { 00:18:30.728 "name": "pt1", 00:18:30.728 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:30.728 "is_configured": true, 00:18:30.728 "data_offset": 2048, 00:18:30.728 "data_size": 63488 00:18:30.728 }, 00:18:30.728 { 00:18:30.728 "name": null, 00:18:30.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.728 "is_configured": false, 00:18:30.728 "data_offset": 2048, 00:18:30.728 "data_size": 63488 00:18:30.728 }, 00:18:30.728 { 00:18:30.728 "name": null, 00:18:30.728 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:30.728 "is_configured": false, 00:18:30.728 "data_offset": 2048, 00:18:30.728 "data_size": 63488 00:18:30.728 }, 00:18:30.728 { 00:18:30.728 "name": null, 00:18:30.728 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:30.728 "is_configured": false, 00:18:30.728 "data_offset": 2048, 00:18:30.729 "data_size": 63488 00:18:30.729 } 00:18:30.729 ] 00:18:30.729 }' 
00:18:30.729 14:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.729 14:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.296 [2024-11-20 14:29:10.095718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:31.296 [2024-11-20 14:29:10.095841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.296 [2024-11-20 14:29:10.095870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:31.296 [2024-11-20 14:29:10.095887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.296 [2024-11-20 14:29:10.096463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.296 [2024-11-20 14:29:10.096506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:31.296 [2024-11-20 14:29:10.096610] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:31.296 [2024-11-20 14:29:10.096649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:31.296 pt2 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.296 [2024-11-20 14:29:10.103688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.296 "name": "raid_bdev1", 00:18:31.296 "uuid": "6953f3bc-2176-4e64-91f8-9def348be37d", 00:18:31.296 "strip_size_kb": 64, 00:18:31.296 "state": "configuring", 00:18:31.296 "raid_level": "raid5f", 00:18:31.296 "superblock": true, 00:18:31.296 "num_base_bdevs": 4, 00:18:31.296 "num_base_bdevs_discovered": 1, 00:18:31.296 "num_base_bdevs_operational": 4, 00:18:31.296 "base_bdevs_list": [ 00:18:31.296 { 00:18:31.296 "name": "pt1", 00:18:31.296 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:31.296 "is_configured": true, 00:18:31.296 "data_offset": 2048, 00:18:31.296 "data_size": 63488 00:18:31.296 }, 00:18:31.296 { 00:18:31.296 "name": null, 00:18:31.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.296 "is_configured": false, 00:18:31.296 "data_offset": 0, 00:18:31.296 "data_size": 63488 00:18:31.296 }, 00:18:31.296 { 00:18:31.296 "name": null, 00:18:31.296 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:31.296 "is_configured": false, 00:18:31.296 "data_offset": 2048, 00:18:31.296 "data_size": 63488 00:18:31.296 }, 00:18:31.296 { 00:18:31.296 "name": null, 00:18:31.296 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:31.296 "is_configured": false, 00:18:31.296 "data_offset": 2048, 00:18:31.296 "data_size": 63488 00:18:31.296 } 00:18:31.296 ] 00:18:31.296 }' 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.296 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.865 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:31.865 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:31.865 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:18:31.865 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.865 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.865 [2024-11-20 14:29:10.647868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:31.865 [2024-11-20 14:29:10.647954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.865 [2024-11-20 14:29:10.648002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:31.865 [2024-11-20 14:29:10.648020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.865 [2024-11-20 14:29:10.648582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.865 [2024-11-20 14:29:10.648608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:31.865 [2024-11-20 14:29:10.648710] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:31.865 [2024-11-20 14:29:10.648743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:31.865 pt2 00:18:31.865 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.865 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:31.865 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.866 [2024-11-20 14:29:10.655790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:18:31.866 [2024-11-20 14:29:10.655848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.866 [2024-11-20 14:29:10.655881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:31.866 [2024-11-20 14:29:10.655897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.866 [2024-11-20 14:29:10.656363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.866 [2024-11-20 14:29:10.656388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:31.866 [2024-11-20 14:29:10.656468] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:31.866 [2024-11-20 14:29:10.656503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:31.866 pt3 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.866 [2024-11-20 14:29:10.663770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:31.866 [2024-11-20 14:29:10.663827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.866 [2024-11-20 14:29:10.663853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:31.866 [2024-11-20 14:29:10.663866] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.866 [2024-11-20 14:29:10.664355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.866 [2024-11-20 14:29:10.664387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:31.866 [2024-11-20 14:29:10.664479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:31.866 [2024-11-20 14:29:10.664512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:31.866 [2024-11-20 14:29:10.664685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:31.866 [2024-11-20 14:29:10.664702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:31.866 [2024-11-20 14:29:10.665031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:31.866 [2024-11-20 14:29:10.671594] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:31.866 [2024-11-20 14:29:10.671633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:31.866 pt4 00:18:31.866 [2024-11-20 14:29:10.671887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.866 "name": "raid_bdev1", 00:18:31.866 "uuid": "6953f3bc-2176-4e64-91f8-9def348be37d", 00:18:31.866 "strip_size_kb": 64, 00:18:31.866 "state": "online", 00:18:31.866 "raid_level": "raid5f", 00:18:31.866 "superblock": true, 00:18:31.866 "num_base_bdevs": 4, 00:18:31.866 "num_base_bdevs_discovered": 4, 00:18:31.866 "num_base_bdevs_operational": 4, 00:18:31.866 "base_bdevs_list": [ 00:18:31.866 { 00:18:31.866 "name": "pt1", 00:18:31.866 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:31.866 "is_configured": true, 00:18:31.866 
"data_offset": 2048, 00:18:31.866 "data_size": 63488 00:18:31.866 }, 00:18:31.866 { 00:18:31.866 "name": "pt2", 00:18:31.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.866 "is_configured": true, 00:18:31.866 "data_offset": 2048, 00:18:31.866 "data_size": 63488 00:18:31.866 }, 00:18:31.866 { 00:18:31.866 "name": "pt3", 00:18:31.866 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:31.866 "is_configured": true, 00:18:31.866 "data_offset": 2048, 00:18:31.866 "data_size": 63488 00:18:31.866 }, 00:18:31.866 { 00:18:31.866 "name": "pt4", 00:18:31.866 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:31.866 "is_configured": true, 00:18:31.866 "data_offset": 2048, 00:18:31.866 "data_size": 63488 00:18:31.866 } 00:18:31.866 ] 00:18:31.866 }' 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.866 14:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.434 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:32.434 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:32.434 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:32.434 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:32.434 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:32.434 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:32.434 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:32.434 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:32.434 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.434 14:29:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.434 [2024-11-20 14:29:11.211822] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.434 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.434 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:32.434 "name": "raid_bdev1", 00:18:32.434 "aliases": [ 00:18:32.434 "6953f3bc-2176-4e64-91f8-9def348be37d" 00:18:32.434 ], 00:18:32.434 "product_name": "Raid Volume", 00:18:32.434 "block_size": 512, 00:18:32.434 "num_blocks": 190464, 00:18:32.434 "uuid": "6953f3bc-2176-4e64-91f8-9def348be37d", 00:18:32.434 "assigned_rate_limits": { 00:18:32.434 "rw_ios_per_sec": 0, 00:18:32.434 "rw_mbytes_per_sec": 0, 00:18:32.434 "r_mbytes_per_sec": 0, 00:18:32.434 "w_mbytes_per_sec": 0 00:18:32.434 }, 00:18:32.434 "claimed": false, 00:18:32.434 "zoned": false, 00:18:32.434 "supported_io_types": { 00:18:32.434 "read": true, 00:18:32.434 "write": true, 00:18:32.434 "unmap": false, 00:18:32.434 "flush": false, 00:18:32.434 "reset": true, 00:18:32.434 "nvme_admin": false, 00:18:32.434 "nvme_io": false, 00:18:32.434 "nvme_io_md": false, 00:18:32.434 "write_zeroes": true, 00:18:32.434 "zcopy": false, 00:18:32.434 "get_zone_info": false, 00:18:32.434 "zone_management": false, 00:18:32.434 "zone_append": false, 00:18:32.434 "compare": false, 00:18:32.434 "compare_and_write": false, 00:18:32.434 "abort": false, 00:18:32.434 "seek_hole": false, 00:18:32.434 "seek_data": false, 00:18:32.434 "copy": false, 00:18:32.434 "nvme_iov_md": false 00:18:32.434 }, 00:18:32.434 "driver_specific": { 00:18:32.434 "raid": { 00:18:32.434 "uuid": "6953f3bc-2176-4e64-91f8-9def348be37d", 00:18:32.434 "strip_size_kb": 64, 00:18:32.434 "state": "online", 00:18:32.434 "raid_level": "raid5f", 00:18:32.434 "superblock": true, 00:18:32.434 "num_base_bdevs": 4, 00:18:32.434 "num_base_bdevs_discovered": 4, 
00:18:32.434 "num_base_bdevs_operational": 4, 00:18:32.434 "base_bdevs_list": [ 00:18:32.434 { 00:18:32.434 "name": "pt1", 00:18:32.434 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:32.434 "is_configured": true, 00:18:32.434 "data_offset": 2048, 00:18:32.434 "data_size": 63488 00:18:32.434 }, 00:18:32.434 { 00:18:32.434 "name": "pt2", 00:18:32.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.434 "is_configured": true, 00:18:32.434 "data_offset": 2048, 00:18:32.434 "data_size": 63488 00:18:32.434 }, 00:18:32.434 { 00:18:32.434 "name": "pt3", 00:18:32.434 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:32.434 "is_configured": true, 00:18:32.435 "data_offset": 2048, 00:18:32.435 "data_size": 63488 00:18:32.435 }, 00:18:32.435 { 00:18:32.435 "name": "pt4", 00:18:32.435 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:32.435 "is_configured": true, 00:18:32.435 "data_offset": 2048, 00:18:32.435 "data_size": 63488 00:18:32.435 } 00:18:32.435 ] 00:18:32.435 } 00:18:32.435 } 00:18:32.435 }' 00:18:32.435 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:32.435 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:32.435 pt2 00:18:32.435 pt3 00:18:32.435 pt4' 00:18:32.435 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.435 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:32.435 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.435 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:32.435 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.435 14:29:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.435 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.435 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.435 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:32.435 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:32.435 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:32.694 [2024-11-20 14:29:11.583876] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.694 
14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6953f3bc-2176-4e64-91f8-9def348be37d '!=' 6953f3bc-2176-4e64-91f8-9def348be37d ']' 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.694 [2024-11-20 14:29:11.635714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.694 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.953 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.953 "name": "raid_bdev1", 00:18:32.953 "uuid": "6953f3bc-2176-4e64-91f8-9def348be37d", 00:18:32.953 "strip_size_kb": 64, 00:18:32.953 "state": "online", 00:18:32.953 "raid_level": "raid5f", 00:18:32.953 "superblock": true, 00:18:32.953 "num_base_bdevs": 4, 00:18:32.953 "num_base_bdevs_discovered": 3, 00:18:32.953 "num_base_bdevs_operational": 3, 00:18:32.953 "base_bdevs_list": [ 00:18:32.953 { 00:18:32.953 "name": null, 00:18:32.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.953 "is_configured": false, 00:18:32.953 "data_offset": 0, 00:18:32.953 "data_size": 63488 00:18:32.953 }, 00:18:32.953 { 00:18:32.953 "name": "pt2", 00:18:32.953 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.953 "is_configured": true, 00:18:32.953 "data_offset": 2048, 00:18:32.953 "data_size": 63488 00:18:32.953 }, 00:18:32.953 { 00:18:32.953 "name": "pt3", 00:18:32.953 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:32.953 "is_configured": true, 00:18:32.953 "data_offset": 2048, 00:18:32.953 "data_size": 63488 00:18:32.953 }, 00:18:32.953 { 00:18:32.953 "name": "pt4", 00:18:32.953 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:32.953 "is_configured": true, 00:18:32.953 
"data_offset": 2048, 00:18:32.953 "data_size": 63488 00:18:32.953 } 00:18:32.953 ] 00:18:32.953 }' 00:18:32.953 14:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.953 14:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.212 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:33.212 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.212 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.212 [2024-11-20 14:29:12.171900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:33.212 [2024-11-20 14:29:12.171958] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.212 [2024-11-20 14:29:12.172081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.212 [2024-11-20 14:29:12.172187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.212 [2024-11-20 14:29:12.172203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:33.212 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.212 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:33.212 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.212 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.212 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.212 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.471 [2024-11-20 14:29:12.259917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:33.471 [2024-11-20 14:29:12.260036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.471 [2024-11-20 14:29:12.260067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:33.471 [2024-11-20 14:29:12.260081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.471 [2024-11-20 14:29:12.262994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.471 [2024-11-20 14:29:12.263085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:33.471 [2024-11-20 14:29:12.263189] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:33.471 [2024-11-20 14:29:12.263248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:33.471 pt2 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.471 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.471 "name": "raid_bdev1", 00:18:33.471 "uuid": "6953f3bc-2176-4e64-91f8-9def348be37d", 00:18:33.471 "strip_size_kb": 64, 00:18:33.471 "state": "configuring", 00:18:33.471 "raid_level": "raid5f", 00:18:33.471 "superblock": true, 00:18:33.471 
"num_base_bdevs": 4, 00:18:33.471 "num_base_bdevs_discovered": 1, 00:18:33.471 "num_base_bdevs_operational": 3, 00:18:33.471 "base_bdevs_list": [ 00:18:33.471 { 00:18:33.471 "name": null, 00:18:33.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.471 "is_configured": false, 00:18:33.471 "data_offset": 2048, 00:18:33.471 "data_size": 63488 00:18:33.471 }, 00:18:33.471 { 00:18:33.471 "name": "pt2", 00:18:33.471 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:33.471 "is_configured": true, 00:18:33.471 "data_offset": 2048, 00:18:33.471 "data_size": 63488 00:18:33.471 }, 00:18:33.471 { 00:18:33.471 "name": null, 00:18:33.471 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:33.471 "is_configured": false, 00:18:33.471 "data_offset": 2048, 00:18:33.472 "data_size": 63488 00:18:33.472 }, 00:18:33.472 { 00:18:33.472 "name": null, 00:18:33.472 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:33.472 "is_configured": false, 00:18:33.472 "data_offset": 2048, 00:18:33.472 "data_size": 63488 00:18:33.472 } 00:18:33.472 ] 00:18:33.472 }' 00:18:33.472 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.472 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.040 [2024-11-20 14:29:12.784150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:34.040 [2024-11-20 
14:29:12.784273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.040 [2024-11-20 14:29:12.784310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:34.040 [2024-11-20 14:29:12.784325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.040 [2024-11-20 14:29:12.784883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.040 [2024-11-20 14:29:12.784918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:34.040 [2024-11-20 14:29:12.785082] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:34.040 [2024-11-20 14:29:12.785116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:34.040 pt3 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.040 "name": "raid_bdev1", 00:18:34.040 "uuid": "6953f3bc-2176-4e64-91f8-9def348be37d", 00:18:34.040 "strip_size_kb": 64, 00:18:34.040 "state": "configuring", 00:18:34.040 "raid_level": "raid5f", 00:18:34.040 "superblock": true, 00:18:34.040 "num_base_bdevs": 4, 00:18:34.040 "num_base_bdevs_discovered": 2, 00:18:34.040 "num_base_bdevs_operational": 3, 00:18:34.040 "base_bdevs_list": [ 00:18:34.040 { 00:18:34.040 "name": null, 00:18:34.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.040 "is_configured": false, 00:18:34.040 "data_offset": 2048, 00:18:34.040 "data_size": 63488 00:18:34.040 }, 00:18:34.040 { 00:18:34.040 "name": "pt2", 00:18:34.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.040 "is_configured": true, 00:18:34.040 "data_offset": 2048, 00:18:34.040 "data_size": 63488 00:18:34.040 }, 00:18:34.040 { 00:18:34.040 "name": "pt3", 00:18:34.040 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:34.040 "is_configured": true, 00:18:34.040 "data_offset": 2048, 00:18:34.040 "data_size": 63488 00:18:34.040 }, 00:18:34.040 { 00:18:34.040 "name": null, 00:18:34.040 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:34.040 "is_configured": false, 00:18:34.040 "data_offset": 2048, 
00:18:34.040 "data_size": 63488 00:18:34.040 } 00:18:34.040 ] 00:18:34.040 }' 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.040 14:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.365 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:34.365 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:34.365 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:34.365 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:34.365 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.365 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.633 [2024-11-20 14:29:13.320297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:34.633 [2024-11-20 14:29:13.320441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.633 [2024-11-20 14:29:13.320471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:34.633 [2024-11-20 14:29:13.320485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.633 [2024-11-20 14:29:13.321074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.633 [2024-11-20 14:29:13.321110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:34.633 [2024-11-20 14:29:13.321215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:34.633 [2024-11-20 14:29:13.321255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:34.633 [2024-11-20 14:29:13.321437] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:34.633 [2024-11-20 14:29:13.321452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:34.633 [2024-11-20 14:29:13.321757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:34.633 [2024-11-20 14:29:13.328379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:34.633 [2024-11-20 14:29:13.328430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:34.633 [2024-11-20 14:29:13.328798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.633 pt4 00:18:34.633 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.633 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:34.633 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.633 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.633 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:34.633 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:34.633 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:34.633 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.634 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.634 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.634 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.634 
14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.634 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.634 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.634 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.634 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.634 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.634 "name": "raid_bdev1", 00:18:34.634 "uuid": "6953f3bc-2176-4e64-91f8-9def348be37d", 00:18:34.634 "strip_size_kb": 64, 00:18:34.634 "state": "online", 00:18:34.634 "raid_level": "raid5f", 00:18:34.634 "superblock": true, 00:18:34.634 "num_base_bdevs": 4, 00:18:34.634 "num_base_bdevs_discovered": 3, 00:18:34.634 "num_base_bdevs_operational": 3, 00:18:34.634 "base_bdevs_list": [ 00:18:34.634 { 00:18:34.634 "name": null, 00:18:34.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.634 "is_configured": false, 00:18:34.634 "data_offset": 2048, 00:18:34.634 "data_size": 63488 00:18:34.634 }, 00:18:34.634 { 00:18:34.634 "name": "pt2", 00:18:34.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.634 "is_configured": true, 00:18:34.634 "data_offset": 2048, 00:18:34.634 "data_size": 63488 00:18:34.634 }, 00:18:34.634 { 00:18:34.634 "name": "pt3", 00:18:34.634 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:34.634 "is_configured": true, 00:18:34.634 "data_offset": 2048, 00:18:34.634 "data_size": 63488 00:18:34.634 }, 00:18:34.634 { 00:18:34.634 "name": "pt4", 00:18:34.634 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:34.634 "is_configured": true, 00:18:34.634 "data_offset": 2048, 00:18:34.634 "data_size": 63488 00:18:34.634 } 00:18:34.634 ] 00:18:34.634 }' 00:18:34.634 14:29:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.634 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.893 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:34.893 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.893 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.893 [2024-11-20 14:29:13.864466] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.893 [2024-11-20 14:29:13.864502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.893 [2024-11-20 14:29:13.864599] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.893 [2024-11-20 14:29:13.864709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.893 [2024-11-20 14:29:13.864730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:34.893 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.893 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.893 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.893 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.893 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.152 [2024-11-20 14:29:13.936488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:35.152 [2024-11-20 14:29:13.936709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.152 [2024-11-20 14:29:13.936755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:35.152 [2024-11-20 14:29:13.936776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.152 [2024-11-20 14:29:13.939796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.152 pt1 00:18:35.152 [2024-11-20 14:29:13.940010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:35.152 [2024-11-20 14:29:13.940129] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:35.152 [2024-11-20 14:29:13.940197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:18:35.152 [2024-11-20 14:29:13.940366] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:35.152 [2024-11-20 14:29:13.940390] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:35.152 [2024-11-20 14:29:13.940411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:35.152 [2024-11-20 14:29:13.940531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:35.152 [2024-11-20 14:29:13.940679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.152 14:29:13 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.153 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.153 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.153 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.153 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.153 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.153 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.153 "name": "raid_bdev1", 00:18:35.153 "uuid": "6953f3bc-2176-4e64-91f8-9def348be37d", 00:18:35.153 "strip_size_kb": 64, 00:18:35.153 "state": "configuring", 00:18:35.153 "raid_level": "raid5f", 00:18:35.153 "superblock": true, 00:18:35.153 "num_base_bdevs": 4, 00:18:35.153 "num_base_bdevs_discovered": 2, 00:18:35.153 "num_base_bdevs_operational": 3, 00:18:35.153 "base_bdevs_list": [ 00:18:35.153 { 00:18:35.153 "name": null, 00:18:35.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.153 "is_configured": false, 00:18:35.153 "data_offset": 2048, 00:18:35.153 "data_size": 63488 00:18:35.153 }, 00:18:35.153 { 00:18:35.153 "name": "pt2", 00:18:35.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.153 "is_configured": true, 00:18:35.153 "data_offset": 2048, 00:18:35.153 "data_size": 63488 00:18:35.153 }, 00:18:35.153 { 00:18:35.153 "name": "pt3", 00:18:35.153 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:35.153 "is_configured": true, 00:18:35.153 "data_offset": 2048, 00:18:35.153 "data_size": 63488 00:18:35.153 }, 00:18:35.153 { 00:18:35.153 "name": null, 00:18:35.153 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:35.153 "is_configured": false, 00:18:35.153 "data_offset": 2048, 00:18:35.153 "data_size": 63488 00:18:35.153 } 00:18:35.153 ] 
00:18:35.153 }' 00:18:35.153 14:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.153 14:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.720 [2024-11-20 14:29:14.516941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:35.720 [2024-11-20 14:29:14.517178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.720 [2024-11-20 14:29:14.517226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:35.720 [2024-11-20 14:29:14.517243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.720 [2024-11-20 14:29:14.517798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.720 [2024-11-20 14:29:14.517825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:18:35.720 [2024-11-20 14:29:14.517961] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:35.720 [2024-11-20 14:29:14.517993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:35.720 [2024-11-20 14:29:14.518183] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:35.720 [2024-11-20 14:29:14.518200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:35.720 [2024-11-20 14:29:14.518569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:35.720 pt4 00:18:35.720 [2024-11-20 14:29:14.525273] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:35.720 [2024-11-20 14:29:14.525309] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:35.720 [2024-11-20 14:29:14.525665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.720 14:29:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.720 "name": "raid_bdev1", 00:18:35.720 "uuid": "6953f3bc-2176-4e64-91f8-9def348be37d", 00:18:35.720 "strip_size_kb": 64, 00:18:35.720 "state": "online", 00:18:35.720 "raid_level": "raid5f", 00:18:35.720 "superblock": true, 00:18:35.720 "num_base_bdevs": 4, 00:18:35.720 "num_base_bdevs_discovered": 3, 00:18:35.720 "num_base_bdevs_operational": 3, 00:18:35.720 "base_bdevs_list": [ 00:18:35.720 { 00:18:35.720 "name": null, 00:18:35.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.720 "is_configured": false, 00:18:35.720 "data_offset": 2048, 00:18:35.720 "data_size": 63488 00:18:35.720 }, 00:18:35.720 { 00:18:35.720 "name": "pt2", 00:18:35.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.720 "is_configured": true, 00:18:35.720 "data_offset": 2048, 00:18:35.720 "data_size": 63488 00:18:35.720 }, 00:18:35.720 { 00:18:35.720 "name": "pt3", 00:18:35.720 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:35.720 "is_configured": true, 00:18:35.720 "data_offset": 2048, 00:18:35.720 "data_size": 63488 
00:18:35.720 }, 00:18:35.720 { 00:18:35.720 "name": "pt4", 00:18:35.720 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:35.720 "is_configured": true, 00:18:35.720 "data_offset": 2048, 00:18:35.720 "data_size": 63488 00:18:35.720 } 00:18:35.720 ] 00:18:35.720 }' 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.720 14:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.288 14:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:36.288 14:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.288 14:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:36.288 14:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.288 14:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.288 14:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:36.288 14:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:36.288 14:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:36.288 14:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.288 14:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.288 [2024-11-20 14:29:15.105388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.288 14:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.288 14:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6953f3bc-2176-4e64-91f8-9def348be37d '!=' 6953f3bc-2176-4e64-91f8-9def348be37d ']' 00:18:36.288 14:29:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84553 00:18:36.288 14:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84553 ']' 00:18:36.288 14:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84553 00:18:36.288 14:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:18:36.288 14:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.288 14:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84553 00:18:36.289 killing process with pid 84553 00:18:36.289 14:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:36.289 14:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:36.289 14:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84553' 00:18:36.289 14:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84553 00:18:36.289 [2024-11-20 14:29:15.179539] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:36.289 14:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84553 00:18:36.289 [2024-11-20 14:29:15.179657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:36.289 [2024-11-20 14:29:15.179755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:36.289 [2024-11-20 14:29:15.179775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:36.856 [2024-11-20 14:29:15.541529] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:37.794 14:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:37.794 
00:18:37.794 real 0m9.553s 00:18:37.794 user 0m15.664s 00:18:37.794 sys 0m1.433s 00:18:37.794 14:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.794 14:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.794 ************************************ 00:18:37.794 END TEST raid5f_superblock_test 00:18:37.794 ************************************ 00:18:37.794 14:29:16 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:37.794 14:29:16 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:18:37.794 14:29:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:37.794 14:29:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.794 14:29:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:37.794 ************************************ 00:18:37.794 START TEST raid5f_rebuild_test 00:18:37.794 ************************************ 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:37.794 14:29:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85051 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85051 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85051 ']' 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.794 14:29:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.794 [2024-11-20 14:29:16.759805] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:18:37.794 [2024-11-20 14:29:16.760205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85051 ] 00:18:37.794 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:37.794 Zero copy mechanism will not be used. 00:18:38.052 [2024-11-20 14:29:16.934182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.311 [2024-11-20 14:29:17.066819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.311 [2024-11-20 14:29:17.276389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.311 [2024-11-20 14:29:17.276757] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.879 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.879 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:38.879 14:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:38.879 14:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:38.879 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.879 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.879 BaseBdev1_malloc 00:18:38.879 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.879 14:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:38.879 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.879 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:18:38.879 [2024-11-20 14:29:17.853130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:38.879 [2024-11-20 14:29:17.853208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.879 [2024-11-20 14:29:17.853239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:38.879 [2024-11-20 14:29:17.853257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.879 [2024-11-20 14:29:17.856063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.879 [2024-11-20 14:29:17.856115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:38.879 BaseBdev1 00:18:38.879 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.879 14:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:38.879 14:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:38.879 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.879 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.139 BaseBdev2_malloc 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.139 [2024-11-20 14:29:17.905149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:39.139 [2024-11-20 14:29:17.905370] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.139 [2024-11-20 14:29:17.905454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:39.139 [2024-11-20 14:29:17.905571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.139 [2024-11-20 14:29:17.908350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.139 [2024-11-20 14:29:17.908402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:39.139 BaseBdev2 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.139 BaseBdev3_malloc 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.139 [2024-11-20 14:29:17.971789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:39.139 [2024-11-20 14:29:17.972046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.139 [2024-11-20 14:29:17.972089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:39.139 
[2024-11-20 14:29:17.972109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.139 [2024-11-20 14:29:17.974858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.139 BaseBdev3 00:18:39.139 [2024-11-20 14:29:17.975039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.139 14:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.139 BaseBdev4_malloc 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.139 [2024-11-20 14:29:18.020951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:39.139 [2024-11-20 14:29:18.021186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.139 [2024-11-20 14:29:18.021260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:39.139 [2024-11-20 14:29:18.021447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.139 [2024-11-20 14:29:18.024189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:18:39.139 [2024-11-20 14:29:18.024364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:39.139 BaseBdev4 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.139 spare_malloc 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.139 spare_delay 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.139 [2024-11-20 14:29:18.081117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:39.139 [2024-11-20 14:29:18.081320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.139 [2024-11-20 14:29:18.081391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:39.139 [2024-11-20 14:29:18.081527] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.139 [2024-11-20 14:29:18.084391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.139 [2024-11-20 14:29:18.084622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:39.139 spare 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.139 [2024-11-20 14:29:18.089381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.139 [2024-11-20 14:29:18.092130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:39.139 [2024-11-20 14:29:18.092344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:39.139 [2024-11-20 14:29:18.092443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:39.139 [2024-11-20 14:29:18.092578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:39.139 [2024-11-20 14:29:18.092600] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:39.139 [2024-11-20 14:29:18.092928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:39.139 [2024-11-20 14:29:18.099890] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:39.139 [2024-11-20 14:29:18.099915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:39.139 [2024-11-20 
14:29:18.100210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.139 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.140 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:39.140 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.140 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:39.140 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.140 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.140 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.140 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.140 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.140 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.140 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.140 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.460 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.460 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.460 "name": "raid_bdev1", 00:18:39.460 "uuid": 
"2bfd8604-aeb9-44a6-b5c8-a218e5a29cb6", 00:18:39.460 "strip_size_kb": 64, 00:18:39.460 "state": "online", 00:18:39.460 "raid_level": "raid5f", 00:18:39.460 "superblock": false, 00:18:39.460 "num_base_bdevs": 4, 00:18:39.460 "num_base_bdevs_discovered": 4, 00:18:39.460 "num_base_bdevs_operational": 4, 00:18:39.460 "base_bdevs_list": [ 00:18:39.460 { 00:18:39.460 "name": "BaseBdev1", 00:18:39.460 "uuid": "0cb1177a-5480-53e5-8f2b-b0258afbaf13", 00:18:39.460 "is_configured": true, 00:18:39.460 "data_offset": 0, 00:18:39.460 "data_size": 65536 00:18:39.460 }, 00:18:39.460 { 00:18:39.460 "name": "BaseBdev2", 00:18:39.460 "uuid": "29a13d2a-89b5-5b52-ba30-255040e63fd4", 00:18:39.460 "is_configured": true, 00:18:39.460 "data_offset": 0, 00:18:39.460 "data_size": 65536 00:18:39.460 }, 00:18:39.460 { 00:18:39.460 "name": "BaseBdev3", 00:18:39.460 "uuid": "8296c1ea-8f7c-5093-9919-fbe20f125f9d", 00:18:39.460 "is_configured": true, 00:18:39.460 "data_offset": 0, 00:18:39.460 "data_size": 65536 00:18:39.460 }, 00:18:39.460 { 00:18:39.460 "name": "BaseBdev4", 00:18:39.460 "uuid": "53e44be0-f399-503a-8f7b-16270fa02567", 00:18:39.460 "is_configured": true, 00:18:39.460 "data_offset": 0, 00:18:39.460 "data_size": 65536 00:18:39.460 } 00:18:39.460 ] 00:18:39.460 }' 00:18:39.460 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.460 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.719 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:39.719 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:39.719 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.719 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.719 [2024-11-20 14:29:18.632352] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:39.719 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.719 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:18:39.719 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:39.719 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.719 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.719 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.719 14:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.979 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:39.979 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:39.979 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:39.979 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:39.979 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:39.979 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:39.979 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:39.979 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:39.979 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:39.979 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:39.979 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:39.979 14:29:18 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:39.979 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:39.979 14:29:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:40.239 [2024-11-20 14:29:19.044273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:40.239 /dev/nbd0 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:40.239 1+0 records in 00:18:40.239 1+0 records out 00:18:40.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294951 s, 13.9 MB/s 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:40.239 14:29:19 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:40.239 14:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:18:40.806 512+0 records in 00:18:40.806 512+0 records out 00:18:40.806 100663296 bytes (101 MB, 96 MiB) copied, 0.619599 s, 162 MB/s 00:18:40.806 14:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:40.806 14:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:40.806 14:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:40.806 14:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:40.806 14:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:40.806 14:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:40.806 14:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:18:41.374 [2024-11-20 14:29:20.089736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.374 [2024-11-20 14:29:20.105840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.374 "name": "raid_bdev1", 00:18:41.374 "uuid": "2bfd8604-aeb9-44a6-b5c8-a218e5a29cb6", 00:18:41.374 "strip_size_kb": 64, 00:18:41.374 "state": "online", 00:18:41.374 "raid_level": "raid5f", 00:18:41.374 "superblock": false, 00:18:41.374 "num_base_bdevs": 4, 00:18:41.374 "num_base_bdevs_discovered": 3, 00:18:41.374 "num_base_bdevs_operational": 3, 00:18:41.374 "base_bdevs_list": [ 00:18:41.374 { 00:18:41.374 "name": null, 00:18:41.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.374 "is_configured": false, 00:18:41.374 "data_offset": 0, 00:18:41.374 "data_size": 65536 00:18:41.374 }, 00:18:41.374 { 00:18:41.374 "name": "BaseBdev2", 00:18:41.374 "uuid": "29a13d2a-89b5-5b52-ba30-255040e63fd4", 00:18:41.374 "is_configured": true, 00:18:41.374 
"data_offset": 0, 00:18:41.374 "data_size": 65536 00:18:41.374 }, 00:18:41.374 { 00:18:41.374 "name": "BaseBdev3", 00:18:41.374 "uuid": "8296c1ea-8f7c-5093-9919-fbe20f125f9d", 00:18:41.374 "is_configured": true, 00:18:41.374 "data_offset": 0, 00:18:41.374 "data_size": 65536 00:18:41.374 }, 00:18:41.374 { 00:18:41.374 "name": "BaseBdev4", 00:18:41.374 "uuid": "53e44be0-f399-503a-8f7b-16270fa02567", 00:18:41.374 "is_configured": true, 00:18:41.374 "data_offset": 0, 00:18:41.374 "data_size": 65536 00:18:41.374 } 00:18:41.374 ] 00:18:41.374 }' 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.374 14:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.942 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:41.942 14:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.942 14:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.942 [2024-11-20 14:29:20.622005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:41.942 [2024-11-20 14:29:20.636465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:41.942 14:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.942 14:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:41.942 [2024-11-20 14:29:20.645503] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:42.878 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.878 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.878 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:18:42.878 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.878 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.878 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.878 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.878 14:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.878 14:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.878 14:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.878 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.878 "name": "raid_bdev1", 00:18:42.878 "uuid": "2bfd8604-aeb9-44a6-b5c8-a218e5a29cb6", 00:18:42.878 "strip_size_kb": 64, 00:18:42.878 "state": "online", 00:18:42.878 "raid_level": "raid5f", 00:18:42.878 "superblock": false, 00:18:42.878 "num_base_bdevs": 4, 00:18:42.878 "num_base_bdevs_discovered": 4, 00:18:42.878 "num_base_bdevs_operational": 4, 00:18:42.878 "process": { 00:18:42.878 "type": "rebuild", 00:18:42.878 "target": "spare", 00:18:42.878 "progress": { 00:18:42.878 "blocks": 17280, 00:18:42.878 "percent": 8 00:18:42.878 } 00:18:42.878 }, 00:18:42.878 "base_bdevs_list": [ 00:18:42.878 { 00:18:42.878 "name": "spare", 00:18:42.878 "uuid": "d69de2c4-1822-54bf-af3d-470e3ab68bbc", 00:18:42.878 "is_configured": true, 00:18:42.878 "data_offset": 0, 00:18:42.878 "data_size": 65536 00:18:42.878 }, 00:18:42.878 { 00:18:42.878 "name": "BaseBdev2", 00:18:42.878 "uuid": "29a13d2a-89b5-5b52-ba30-255040e63fd4", 00:18:42.878 "is_configured": true, 00:18:42.878 "data_offset": 0, 00:18:42.878 "data_size": 65536 00:18:42.878 }, 00:18:42.878 { 00:18:42.878 "name": "BaseBdev3", 00:18:42.878 "uuid": 
"8296c1ea-8f7c-5093-9919-fbe20f125f9d", 00:18:42.878 "is_configured": true, 00:18:42.878 "data_offset": 0, 00:18:42.878 "data_size": 65536 00:18:42.878 }, 00:18:42.878 { 00:18:42.878 "name": "BaseBdev4", 00:18:42.878 "uuid": "53e44be0-f399-503a-8f7b-16270fa02567", 00:18:42.878 "is_configured": true, 00:18:42.878 "data_offset": 0, 00:18:42.878 "data_size": 65536 00:18:42.878 } 00:18:42.878 ] 00:18:42.878 }' 00:18:42.879 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.879 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.879 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.879 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.879 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:42.879 14:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.879 14:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.879 [2024-11-20 14:29:21.814807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.879 [2024-11-20 14:29:21.858970] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:42.879 [2024-11-20 14:29:21.859270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.137 [2024-11-20 14:29:21.859433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:43.137 [2024-11-20 14:29:21.859496] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:43.137 14:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.137 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:43.137 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.137 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.137 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:43.137 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:43.137 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:43.137 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.137 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.138 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.138 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.138 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.138 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.138 14:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.138 14:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.138 14:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.138 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.138 "name": "raid_bdev1", 00:18:43.138 "uuid": "2bfd8604-aeb9-44a6-b5c8-a218e5a29cb6", 00:18:43.138 "strip_size_kb": 64, 00:18:43.138 "state": "online", 00:18:43.138 "raid_level": "raid5f", 00:18:43.138 "superblock": false, 00:18:43.138 "num_base_bdevs": 4, 00:18:43.138 "num_base_bdevs_discovered": 3, 00:18:43.138 
"num_base_bdevs_operational": 3, 00:18:43.138 "base_bdevs_list": [ 00:18:43.138 { 00:18:43.138 "name": null, 00:18:43.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.138 "is_configured": false, 00:18:43.138 "data_offset": 0, 00:18:43.138 "data_size": 65536 00:18:43.138 }, 00:18:43.138 { 00:18:43.138 "name": "BaseBdev2", 00:18:43.138 "uuid": "29a13d2a-89b5-5b52-ba30-255040e63fd4", 00:18:43.138 "is_configured": true, 00:18:43.138 "data_offset": 0, 00:18:43.138 "data_size": 65536 00:18:43.138 }, 00:18:43.138 { 00:18:43.138 "name": "BaseBdev3", 00:18:43.138 "uuid": "8296c1ea-8f7c-5093-9919-fbe20f125f9d", 00:18:43.138 "is_configured": true, 00:18:43.138 "data_offset": 0, 00:18:43.138 "data_size": 65536 00:18:43.138 }, 00:18:43.138 { 00:18:43.138 "name": "BaseBdev4", 00:18:43.138 "uuid": "53e44be0-f399-503a-8f7b-16270fa02567", 00:18:43.138 "is_configured": true, 00:18:43.138 "data_offset": 0, 00:18:43.138 "data_size": 65536 00:18:43.138 } 00:18:43.138 ] 00:18:43.138 }' 00:18:43.138 14:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.138 14:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.706 14:29:22 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.706 "name": "raid_bdev1", 00:18:43.706 "uuid": "2bfd8604-aeb9-44a6-b5c8-a218e5a29cb6", 00:18:43.706 "strip_size_kb": 64, 00:18:43.706 "state": "online", 00:18:43.706 "raid_level": "raid5f", 00:18:43.706 "superblock": false, 00:18:43.706 "num_base_bdevs": 4, 00:18:43.706 "num_base_bdevs_discovered": 3, 00:18:43.706 "num_base_bdevs_operational": 3, 00:18:43.706 "base_bdevs_list": [ 00:18:43.706 { 00:18:43.706 "name": null, 00:18:43.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.706 "is_configured": false, 00:18:43.706 "data_offset": 0, 00:18:43.706 "data_size": 65536 00:18:43.706 }, 00:18:43.706 { 00:18:43.706 "name": "BaseBdev2", 00:18:43.706 "uuid": "29a13d2a-89b5-5b52-ba30-255040e63fd4", 00:18:43.706 "is_configured": true, 00:18:43.706 "data_offset": 0, 00:18:43.706 "data_size": 65536 00:18:43.706 }, 00:18:43.706 { 00:18:43.706 "name": "BaseBdev3", 00:18:43.706 "uuid": "8296c1ea-8f7c-5093-9919-fbe20f125f9d", 00:18:43.706 "is_configured": true, 00:18:43.706 "data_offset": 0, 00:18:43.706 "data_size": 65536 00:18:43.706 }, 00:18:43.706 { 00:18:43.706 "name": "BaseBdev4", 00:18:43.706 "uuid": "53e44be0-f399-503a-8f7b-16270fa02567", 00:18:43.706 "is_configured": true, 00:18:43.706 "data_offset": 0, 00:18:43.706 "data_size": 65536 00:18:43.706 } 00:18:43.706 ] 00:18:43.706 }' 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.706 [2024-11-20 14:29:22.604260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:43.706 [2024-11-20 14:29:22.618036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.706 14:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:43.706 [2024-11-20 14:29:22.627166] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:44.727 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.727 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.727 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.727 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.727 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.727 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.727 14:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.727 14:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.727 14:29:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.727 14:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.002 "name": "raid_bdev1", 00:18:45.002 "uuid": "2bfd8604-aeb9-44a6-b5c8-a218e5a29cb6", 00:18:45.002 "strip_size_kb": 64, 00:18:45.002 "state": "online", 00:18:45.002 "raid_level": "raid5f", 00:18:45.002 "superblock": false, 00:18:45.002 "num_base_bdevs": 4, 00:18:45.002 "num_base_bdevs_discovered": 4, 00:18:45.002 "num_base_bdevs_operational": 4, 00:18:45.002 "process": { 00:18:45.002 "type": "rebuild", 00:18:45.002 "target": "spare", 00:18:45.002 "progress": { 00:18:45.002 "blocks": 17280, 00:18:45.002 "percent": 8 00:18:45.002 } 00:18:45.002 }, 00:18:45.002 "base_bdevs_list": [ 00:18:45.002 { 00:18:45.002 "name": "spare", 00:18:45.002 "uuid": "d69de2c4-1822-54bf-af3d-470e3ab68bbc", 00:18:45.002 "is_configured": true, 00:18:45.002 "data_offset": 0, 00:18:45.002 "data_size": 65536 00:18:45.002 }, 00:18:45.002 { 00:18:45.002 "name": "BaseBdev2", 00:18:45.002 "uuid": "29a13d2a-89b5-5b52-ba30-255040e63fd4", 00:18:45.002 "is_configured": true, 00:18:45.002 "data_offset": 0, 00:18:45.002 "data_size": 65536 00:18:45.002 }, 00:18:45.002 { 00:18:45.002 "name": "BaseBdev3", 00:18:45.002 "uuid": "8296c1ea-8f7c-5093-9919-fbe20f125f9d", 00:18:45.002 "is_configured": true, 00:18:45.002 "data_offset": 0, 00:18:45.002 "data_size": 65536 00:18:45.002 }, 00:18:45.002 { 00:18:45.002 "name": "BaseBdev4", 00:18:45.002 "uuid": "53e44be0-f399-503a-8f7b-16270fa02567", 00:18:45.002 "is_configured": true, 00:18:45.002 "data_offset": 0, 00:18:45.002 "data_size": 65536 00:18:45.002 } 00:18:45.002 ] 00:18:45.002 }' 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=670 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.002 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:45.002 "name": "raid_bdev1", 00:18:45.002 "uuid": "2bfd8604-aeb9-44a6-b5c8-a218e5a29cb6", 00:18:45.002 "strip_size_kb": 64, 00:18:45.002 "state": "online", 00:18:45.002 "raid_level": "raid5f", 00:18:45.002 "superblock": false, 00:18:45.002 "num_base_bdevs": 4, 00:18:45.002 "num_base_bdevs_discovered": 4, 00:18:45.002 "num_base_bdevs_operational": 4, 00:18:45.002 "process": { 00:18:45.002 "type": "rebuild", 00:18:45.002 "target": "spare", 00:18:45.002 "progress": { 00:18:45.002 "blocks": 21120, 00:18:45.002 "percent": 10 00:18:45.002 } 00:18:45.002 }, 00:18:45.002 "base_bdevs_list": [ 00:18:45.002 { 00:18:45.002 "name": "spare", 00:18:45.002 "uuid": "d69de2c4-1822-54bf-af3d-470e3ab68bbc", 00:18:45.002 "is_configured": true, 00:18:45.002 "data_offset": 0, 00:18:45.002 "data_size": 65536 00:18:45.002 }, 00:18:45.002 { 00:18:45.002 "name": "BaseBdev2", 00:18:45.003 "uuid": "29a13d2a-89b5-5b52-ba30-255040e63fd4", 00:18:45.003 "is_configured": true, 00:18:45.003 "data_offset": 0, 00:18:45.003 "data_size": 65536 00:18:45.003 }, 00:18:45.003 { 00:18:45.003 "name": "BaseBdev3", 00:18:45.003 "uuid": "8296c1ea-8f7c-5093-9919-fbe20f125f9d", 00:18:45.003 "is_configured": true, 00:18:45.003 "data_offset": 0, 00:18:45.003 "data_size": 65536 00:18:45.003 }, 00:18:45.003 { 00:18:45.003 "name": "BaseBdev4", 00:18:45.003 "uuid": "53e44be0-f399-503a-8f7b-16270fa02567", 00:18:45.003 "is_configured": true, 00:18:45.003 "data_offset": 0, 00:18:45.003 "data_size": 65536 00:18:45.003 } 00:18:45.003 ] 00:18:45.003 }' 00:18:45.003 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.003 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.003 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.003 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:18:45.003 14:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:46.382 14:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:46.382 14:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.382 14:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.382 14:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:46.382 14:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:46.382 14:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.382 14:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.382 14:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.382 14:29:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.382 14:29:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.382 14:29:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.382 14:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.382 "name": "raid_bdev1", 00:18:46.382 "uuid": "2bfd8604-aeb9-44a6-b5c8-a218e5a29cb6", 00:18:46.382 "strip_size_kb": 64, 00:18:46.382 "state": "online", 00:18:46.382 "raid_level": "raid5f", 00:18:46.382 "superblock": false, 00:18:46.382 "num_base_bdevs": 4, 00:18:46.382 "num_base_bdevs_discovered": 4, 00:18:46.382 "num_base_bdevs_operational": 4, 00:18:46.382 "process": { 00:18:46.382 "type": "rebuild", 00:18:46.382 "target": "spare", 00:18:46.382 "progress": { 00:18:46.382 "blocks": 44160, 00:18:46.382 "percent": 22 00:18:46.382 } 00:18:46.382 }, 00:18:46.382 "base_bdevs_list": [ 
00:18:46.382 { 00:18:46.382 "name": "spare", 00:18:46.382 "uuid": "d69de2c4-1822-54bf-af3d-470e3ab68bbc", 00:18:46.382 "is_configured": true, 00:18:46.382 "data_offset": 0, 00:18:46.382 "data_size": 65536 00:18:46.382 }, 00:18:46.382 { 00:18:46.382 "name": "BaseBdev2", 00:18:46.382 "uuid": "29a13d2a-89b5-5b52-ba30-255040e63fd4", 00:18:46.382 "is_configured": true, 00:18:46.382 "data_offset": 0, 00:18:46.382 "data_size": 65536 00:18:46.382 }, 00:18:46.382 { 00:18:46.382 "name": "BaseBdev3", 00:18:46.382 "uuid": "8296c1ea-8f7c-5093-9919-fbe20f125f9d", 00:18:46.382 "is_configured": true, 00:18:46.382 "data_offset": 0, 00:18:46.382 "data_size": 65536 00:18:46.382 }, 00:18:46.382 { 00:18:46.382 "name": "BaseBdev4", 00:18:46.382 "uuid": "53e44be0-f399-503a-8f7b-16270fa02567", 00:18:46.382 "is_configured": true, 00:18:46.382 "data_offset": 0, 00:18:46.382 "data_size": 65536 00:18:46.382 } 00:18:46.382 ] 00:18:46.382 }' 00:18:46.382 14:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.382 14:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:46.382 14:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.382 14:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:46.382 14:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:47.317 14:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:47.317 14:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.317 14:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.317 14:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:47.317 14:29:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:18:47.317 14:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.317 14:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.317 14:29:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.317 14:29:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.317 14:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.317 14:29:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.318 14:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.318 "name": "raid_bdev1", 00:18:47.318 "uuid": "2bfd8604-aeb9-44a6-b5c8-a218e5a29cb6", 00:18:47.318 "strip_size_kb": 64, 00:18:47.318 "state": "online", 00:18:47.318 "raid_level": "raid5f", 00:18:47.318 "superblock": false, 00:18:47.318 "num_base_bdevs": 4, 00:18:47.318 "num_base_bdevs_discovered": 4, 00:18:47.318 "num_base_bdevs_operational": 4, 00:18:47.318 "process": { 00:18:47.318 "type": "rebuild", 00:18:47.318 "target": "spare", 00:18:47.318 "progress": { 00:18:47.318 "blocks": 65280, 00:18:47.318 "percent": 33 00:18:47.318 } 00:18:47.318 }, 00:18:47.318 "base_bdevs_list": [ 00:18:47.318 { 00:18:47.318 "name": "spare", 00:18:47.318 "uuid": "d69de2c4-1822-54bf-af3d-470e3ab68bbc", 00:18:47.318 "is_configured": true, 00:18:47.318 "data_offset": 0, 00:18:47.318 "data_size": 65536 00:18:47.318 }, 00:18:47.318 { 00:18:47.318 "name": "BaseBdev2", 00:18:47.318 "uuid": "29a13d2a-89b5-5b52-ba30-255040e63fd4", 00:18:47.318 "is_configured": true, 00:18:47.318 "data_offset": 0, 00:18:47.318 "data_size": 65536 00:18:47.318 }, 00:18:47.318 { 00:18:47.318 "name": "BaseBdev3", 00:18:47.318 "uuid": "8296c1ea-8f7c-5093-9919-fbe20f125f9d", 00:18:47.318 "is_configured": true, 00:18:47.318 
"data_offset": 0, 00:18:47.318 "data_size": 65536 00:18:47.318 }, 00:18:47.318 { 00:18:47.318 "name": "BaseBdev4", 00:18:47.318 "uuid": "53e44be0-f399-503a-8f7b-16270fa02567", 00:18:47.318 "is_configured": true, 00:18:47.318 "data_offset": 0, 00:18:47.318 "data_size": 65536 00:18:47.318 } 00:18:47.318 ] 00:18:47.318 }' 00:18:47.318 14:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.318 14:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:47.318 14:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.318 14:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.318 14:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:48.694 14:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:48.694 14:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.694 14:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.694 14:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:48.694 14:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:48.694 14:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.694 14:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.694 14:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.694 14:29:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.694 14:29:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.694 14:29:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.694 14:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.694 "name": "raid_bdev1", 00:18:48.694 "uuid": "2bfd8604-aeb9-44a6-b5c8-a218e5a29cb6", 00:18:48.694 "strip_size_kb": 64, 00:18:48.694 "state": "online", 00:18:48.694 "raid_level": "raid5f", 00:18:48.694 "superblock": false, 00:18:48.694 "num_base_bdevs": 4, 00:18:48.694 "num_base_bdevs_discovered": 4, 00:18:48.694 "num_base_bdevs_operational": 4, 00:18:48.694 "process": { 00:18:48.694 "type": "rebuild", 00:18:48.694 "target": "spare", 00:18:48.694 "progress": { 00:18:48.694 "blocks": 88320, 00:18:48.694 "percent": 44 00:18:48.694 } 00:18:48.694 }, 00:18:48.694 "base_bdevs_list": [ 00:18:48.694 { 00:18:48.694 "name": "spare", 00:18:48.694 "uuid": "d69de2c4-1822-54bf-af3d-470e3ab68bbc", 00:18:48.694 "is_configured": true, 00:18:48.694 "data_offset": 0, 00:18:48.694 "data_size": 65536 00:18:48.694 }, 00:18:48.694 { 00:18:48.694 "name": "BaseBdev2", 00:18:48.694 "uuid": "29a13d2a-89b5-5b52-ba30-255040e63fd4", 00:18:48.694 "is_configured": true, 00:18:48.694 "data_offset": 0, 00:18:48.694 "data_size": 65536 00:18:48.694 }, 00:18:48.694 { 00:18:48.694 "name": "BaseBdev3", 00:18:48.694 "uuid": "8296c1ea-8f7c-5093-9919-fbe20f125f9d", 00:18:48.694 "is_configured": true, 00:18:48.694 "data_offset": 0, 00:18:48.694 "data_size": 65536 00:18:48.694 }, 00:18:48.694 { 00:18:48.694 "name": "BaseBdev4", 00:18:48.694 "uuid": "53e44be0-f399-503a-8f7b-16270fa02567", 00:18:48.694 "is_configured": true, 00:18:48.694 "data_offset": 0, 00:18:48.694 "data_size": 65536 00:18:48.694 } 00:18:48.694 ] 00:18:48.694 }' 00:18:48.694 14:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.694 14:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.694 14:29:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.694 14:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.694 14:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:49.630 14:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:49.630 14:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.630 14:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.630 14:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:49.630 14:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.630 14:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.630 14:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.630 14:29:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.630 14:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.630 14:29:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.630 14:29:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.630 14:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.630 "name": "raid_bdev1", 00:18:49.630 "uuid": "2bfd8604-aeb9-44a6-b5c8-a218e5a29cb6", 00:18:49.630 "strip_size_kb": 64, 00:18:49.630 "state": "online", 00:18:49.630 "raid_level": "raid5f", 00:18:49.630 "superblock": false, 00:18:49.630 "num_base_bdevs": 4, 00:18:49.630 "num_base_bdevs_discovered": 4, 00:18:49.630 "num_base_bdevs_operational": 4, 00:18:49.630 "process": { 00:18:49.630 "type": "rebuild", 00:18:49.630 
"target": "spare", 00:18:49.630 "progress": { 00:18:49.630 "blocks": 109440, 00:18:49.630 "percent": 55 00:18:49.630 } 00:18:49.630 }, 00:18:49.631 "base_bdevs_list": [ 00:18:49.631 { 00:18:49.631 "name": "spare", 00:18:49.631 "uuid": "d69de2c4-1822-54bf-af3d-470e3ab68bbc", 00:18:49.631 "is_configured": true, 00:18:49.631 "data_offset": 0, 00:18:49.631 "data_size": 65536 00:18:49.631 }, 00:18:49.631 { 00:18:49.631 "name": "BaseBdev2", 00:18:49.631 "uuid": "29a13d2a-89b5-5b52-ba30-255040e63fd4", 00:18:49.631 "is_configured": true, 00:18:49.631 "data_offset": 0, 00:18:49.631 "data_size": 65536 00:18:49.631 }, 00:18:49.631 { 00:18:49.631 "name": "BaseBdev3", 00:18:49.631 "uuid": "8296c1ea-8f7c-5093-9919-fbe20f125f9d", 00:18:49.631 "is_configured": true, 00:18:49.631 "data_offset": 0, 00:18:49.631 "data_size": 65536 00:18:49.631 }, 00:18:49.631 { 00:18:49.631 "name": "BaseBdev4", 00:18:49.631 "uuid": "53e44be0-f399-503a-8f7b-16270fa02567", 00:18:49.631 "is_configured": true, 00:18:49.631 "data_offset": 0, 00:18:49.631 "data_size": 65536 00:18:49.631 } 00:18:49.631 ] 00:18:49.631 }' 00:18:49.631 14:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.631 14:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.631 14:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.631 14:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.631 14:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:51.007 14:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:51.007 14:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.007 14:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:18:51.007 14:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.007 14:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.007 14:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.007 14:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.007 14:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.007 14:29:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.007 14:29:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.008 14:29:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.008 14:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.008 "name": "raid_bdev1", 00:18:51.008 "uuid": "2bfd8604-aeb9-44a6-b5c8-a218e5a29cb6", 00:18:51.008 "strip_size_kb": 64, 00:18:51.008 "state": "online", 00:18:51.008 "raid_level": "raid5f", 00:18:51.008 "superblock": false, 00:18:51.008 "num_base_bdevs": 4, 00:18:51.008 "num_base_bdevs_discovered": 4, 00:18:51.008 "num_base_bdevs_operational": 4, 00:18:51.008 "process": { 00:18:51.008 "type": "rebuild", 00:18:51.008 "target": "spare", 00:18:51.008 "progress": { 00:18:51.008 "blocks": 132480, 00:18:51.008 "percent": 67 00:18:51.008 } 00:18:51.008 }, 00:18:51.008 "base_bdevs_list": [ 00:18:51.008 { 00:18:51.008 "name": "spare", 00:18:51.008 "uuid": "d69de2c4-1822-54bf-af3d-470e3ab68bbc", 00:18:51.008 "is_configured": true, 00:18:51.008 "data_offset": 0, 00:18:51.008 "data_size": 65536 00:18:51.008 }, 00:18:51.008 { 00:18:51.008 "name": "BaseBdev2", 00:18:51.008 "uuid": "29a13d2a-89b5-5b52-ba30-255040e63fd4", 00:18:51.008 "is_configured": true, 00:18:51.008 "data_offset": 0, 00:18:51.008 "data_size": 65536 00:18:51.008 
}, 00:18:51.008 { 00:18:51.008 "name": "BaseBdev3", 00:18:51.008 "uuid": "8296c1ea-8f7c-5093-9919-fbe20f125f9d", 00:18:51.008 "is_configured": true, 00:18:51.008 "data_offset": 0, 00:18:51.008 "data_size": 65536 00:18:51.008 }, 00:18:51.008 { 00:18:51.008 "name": "BaseBdev4", 00:18:51.008 "uuid": "53e44be0-f399-503a-8f7b-16270fa02567", 00:18:51.008 "is_configured": true, 00:18:51.008 "data_offset": 0, 00:18:51.008 "data_size": 65536 00:18:51.008 } 00:18:51.008 ] 00:18:51.008 }' 00:18:51.008 14:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.008 14:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.008 14:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.008 14:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.008 14:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:51.941 14:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:51.941 14:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.941 14:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.941 14:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.941 14:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.941 14:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.941 14:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.941 14:29:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.941 14:29:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:18:51.941 14:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.941 14:29:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.941 14:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.941 "name": "raid_bdev1", 00:18:51.941 "uuid": "2bfd8604-aeb9-44a6-b5c8-a218e5a29cb6", 00:18:51.941 "strip_size_kb": 64, 00:18:51.941 "state": "online", 00:18:51.941 "raid_level": "raid5f", 00:18:51.941 "superblock": false, 00:18:51.941 "num_base_bdevs": 4, 00:18:51.941 "num_base_bdevs_discovered": 4, 00:18:51.941 "num_base_bdevs_operational": 4, 00:18:51.941 "process": { 00:18:51.941 "type": "rebuild", 00:18:51.941 "target": "spare", 00:18:51.941 "progress": { 00:18:51.941 "blocks": 153600, 00:18:51.941 "percent": 78 00:18:51.941 } 00:18:51.941 }, 00:18:51.941 "base_bdevs_list": [ 00:18:51.941 { 00:18:51.941 "name": "spare", 00:18:51.941 "uuid": "d69de2c4-1822-54bf-af3d-470e3ab68bbc", 00:18:51.941 "is_configured": true, 00:18:51.941 "data_offset": 0, 00:18:51.941 "data_size": 65536 00:18:51.941 }, 00:18:51.941 { 00:18:51.941 "name": "BaseBdev2", 00:18:51.941 "uuid": "29a13d2a-89b5-5b52-ba30-255040e63fd4", 00:18:51.941 "is_configured": true, 00:18:51.941 "data_offset": 0, 00:18:51.941 "data_size": 65536 00:18:51.941 }, 00:18:51.941 { 00:18:51.941 "name": "BaseBdev3", 00:18:51.941 "uuid": "8296c1ea-8f7c-5093-9919-fbe20f125f9d", 00:18:51.941 "is_configured": true, 00:18:51.941 "data_offset": 0, 00:18:51.941 "data_size": 65536 00:18:51.941 }, 00:18:51.941 { 00:18:51.941 "name": "BaseBdev4", 00:18:51.941 "uuid": "53e44be0-f399-503a-8f7b-16270fa02567", 00:18:51.941 "is_configured": true, 00:18:51.941 "data_offset": 0, 00:18:51.941 "data_size": 65536 00:18:51.941 } 00:18:51.941 ] 00:18:51.941 }' 00:18:51.941 14:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.941 14:29:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.941 14:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.200 14:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.200 14:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:53.135 14:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:53.135 14:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.135 14:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.135 14:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.135 14:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.135 14:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.135 14:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.135 14:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.135 14:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.135 14:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.135 14:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.135 14:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.135 "name": "raid_bdev1", 00:18:53.135 "uuid": "2bfd8604-aeb9-44a6-b5c8-a218e5a29cb6", 00:18:53.135 "strip_size_kb": 64, 00:18:53.135 "state": "online", 00:18:53.135 "raid_level": "raid5f", 00:18:53.135 "superblock": false, 00:18:53.135 "num_base_bdevs": 4, 00:18:53.135 
"num_base_bdevs_discovered": 4, 00:18:53.135 "num_base_bdevs_operational": 4, 00:18:53.135 "process": { 00:18:53.135 "type": "rebuild", 00:18:53.135 "target": "spare", 00:18:53.135 "progress": { 00:18:53.135 "blocks": 176640, 00:18:53.135 "percent": 89 00:18:53.135 } 00:18:53.135 }, 00:18:53.135 "base_bdevs_list": [ 00:18:53.135 { 00:18:53.135 "name": "spare", 00:18:53.135 "uuid": "d69de2c4-1822-54bf-af3d-470e3ab68bbc", 00:18:53.135 "is_configured": true, 00:18:53.135 "data_offset": 0, 00:18:53.135 "data_size": 65536 00:18:53.135 }, 00:18:53.135 { 00:18:53.135 "name": "BaseBdev2", 00:18:53.135 "uuid": "29a13d2a-89b5-5b52-ba30-255040e63fd4", 00:18:53.135 "is_configured": true, 00:18:53.135 "data_offset": 0, 00:18:53.135 "data_size": 65536 00:18:53.135 }, 00:18:53.135 { 00:18:53.135 "name": "BaseBdev3", 00:18:53.135 "uuid": "8296c1ea-8f7c-5093-9919-fbe20f125f9d", 00:18:53.135 "is_configured": true, 00:18:53.135 "data_offset": 0, 00:18:53.135 "data_size": 65536 00:18:53.135 }, 00:18:53.135 { 00:18:53.135 "name": "BaseBdev4", 00:18:53.135 "uuid": "53e44be0-f399-503a-8f7b-16270fa02567", 00:18:53.135 "is_configured": true, 00:18:53.135 "data_offset": 0, 00:18:53.135 "data_size": 65536 00:18:53.135 } 00:18:53.135 ] 00:18:53.135 }' 00:18:53.135 14:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.135 14:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.135 14:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.394 14:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.394 14:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:54.331 [2024-11-20 14:29:33.032666] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:54.331 [2024-11-20 14:29:33.033017] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:54.331 [2024-11-20 14:29:33.033100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.331 "name": "raid_bdev1", 00:18:54.331 "uuid": "2bfd8604-aeb9-44a6-b5c8-a218e5a29cb6", 00:18:54.331 "strip_size_kb": 64, 00:18:54.331 "state": "online", 00:18:54.331 "raid_level": "raid5f", 00:18:54.331 "superblock": false, 00:18:54.331 "num_base_bdevs": 4, 00:18:54.331 "num_base_bdevs_discovered": 4, 00:18:54.331 "num_base_bdevs_operational": 4, 00:18:54.331 "base_bdevs_list": [ 00:18:54.331 { 00:18:54.331 "name": "spare", 00:18:54.331 "uuid": 
"d69de2c4-1822-54bf-af3d-470e3ab68bbc", 00:18:54.331 "is_configured": true, 00:18:54.331 "data_offset": 0, 00:18:54.331 "data_size": 65536 00:18:54.331 }, 00:18:54.331 { 00:18:54.331 "name": "BaseBdev2", 00:18:54.331 "uuid": "29a13d2a-89b5-5b52-ba30-255040e63fd4", 00:18:54.331 "is_configured": true, 00:18:54.331 "data_offset": 0, 00:18:54.331 "data_size": 65536 00:18:54.331 }, 00:18:54.331 { 00:18:54.331 "name": "BaseBdev3", 00:18:54.331 "uuid": "8296c1ea-8f7c-5093-9919-fbe20f125f9d", 00:18:54.331 "is_configured": true, 00:18:54.331 "data_offset": 0, 00:18:54.331 "data_size": 65536 00:18:54.331 }, 00:18:54.331 { 00:18:54.331 "name": "BaseBdev4", 00:18:54.331 "uuid": "53e44be0-f399-503a-8f7b-16270fa02567", 00:18:54.331 "is_configured": true, 00:18:54.331 "data_offset": 0, 00:18:54.331 "data_size": 65536 00:18:54.331 } 00:18:54.331 ] 00:18:54.331 }' 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.331 14:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.591 "name": "raid_bdev1", 00:18:54.591 "uuid": "2bfd8604-aeb9-44a6-b5c8-a218e5a29cb6", 00:18:54.591 "strip_size_kb": 64, 00:18:54.591 "state": "online", 00:18:54.591 "raid_level": "raid5f", 00:18:54.591 "superblock": false, 00:18:54.591 "num_base_bdevs": 4, 00:18:54.591 "num_base_bdevs_discovered": 4, 00:18:54.591 "num_base_bdevs_operational": 4, 00:18:54.591 "base_bdevs_list": [ 00:18:54.591 { 00:18:54.591 "name": "spare", 00:18:54.591 "uuid": "d69de2c4-1822-54bf-af3d-470e3ab68bbc", 00:18:54.591 "is_configured": true, 00:18:54.591 "data_offset": 0, 00:18:54.591 "data_size": 65536 00:18:54.591 }, 00:18:54.591 { 00:18:54.591 "name": "BaseBdev2", 00:18:54.591 "uuid": "29a13d2a-89b5-5b52-ba30-255040e63fd4", 00:18:54.591 "is_configured": true, 00:18:54.591 "data_offset": 0, 00:18:54.591 "data_size": 65536 00:18:54.591 }, 00:18:54.591 { 00:18:54.591 "name": "BaseBdev3", 00:18:54.591 "uuid": "8296c1ea-8f7c-5093-9919-fbe20f125f9d", 00:18:54.591 "is_configured": true, 00:18:54.591 "data_offset": 0, 00:18:54.591 "data_size": 65536 00:18:54.591 }, 00:18:54.591 { 00:18:54.591 "name": "BaseBdev4", 00:18:54.591 "uuid": "53e44be0-f399-503a-8f7b-16270fa02567", 00:18:54.591 "is_configured": true, 00:18:54.591 "data_offset": 0, 00:18:54.591 "data_size": 65536 00:18:54.591 } 00:18:54.591 ] 00:18:54.591 }' 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:54.591 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.591 "name": "raid_bdev1", 00:18:54.591 "uuid": "2bfd8604-aeb9-44a6-b5c8-a218e5a29cb6", 00:18:54.591 "strip_size_kb": 64, 00:18:54.591 "state": "online", 00:18:54.591 "raid_level": "raid5f", 00:18:54.591 "superblock": false, 00:18:54.592 "num_base_bdevs": 4, 00:18:54.592 "num_base_bdevs_discovered": 4, 00:18:54.592 "num_base_bdevs_operational": 4, 00:18:54.592 "base_bdevs_list": [ 00:18:54.592 { 00:18:54.592 "name": "spare", 00:18:54.592 "uuid": "d69de2c4-1822-54bf-af3d-470e3ab68bbc", 00:18:54.592 "is_configured": true, 00:18:54.592 "data_offset": 0, 00:18:54.592 "data_size": 65536 00:18:54.592 }, 00:18:54.592 { 00:18:54.592 "name": "BaseBdev2", 00:18:54.592 "uuid": "29a13d2a-89b5-5b52-ba30-255040e63fd4", 00:18:54.592 "is_configured": true, 00:18:54.592 "data_offset": 0, 00:18:54.592 "data_size": 65536 00:18:54.592 }, 00:18:54.592 { 00:18:54.592 "name": "BaseBdev3", 00:18:54.592 "uuid": "8296c1ea-8f7c-5093-9919-fbe20f125f9d", 00:18:54.592 "is_configured": true, 00:18:54.592 "data_offset": 0, 00:18:54.592 "data_size": 65536 00:18:54.592 }, 00:18:54.592 { 00:18:54.592 "name": "BaseBdev4", 00:18:54.592 "uuid": "53e44be0-f399-503a-8f7b-16270fa02567", 00:18:54.592 "is_configured": true, 00:18:54.592 "data_offset": 0, 00:18:54.592 "data_size": 65536 00:18:54.592 } 00:18:54.592 ] 00:18:54.592 }' 00:18:54.592 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.592 14:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.198 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:55.198 14:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.198 14:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.198 [2024-11-20 14:29:33.984786] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:55.198 [2024-11-20 14:29:33.985003] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.198 [2024-11-20 14:29:33.985133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.198 [2024-11-20 14:29:33.985265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.198 [2024-11-20 14:29:33.985284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:55.198 14:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.198 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.198 14:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:55.198 14:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.198 14:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.198 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.198 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:55.198 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:55.198 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:55.198 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:55.198 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:55.198 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:55.198 14:29:34 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:18:55.198 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:55.198 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:55.198 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:55.198 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:55.198 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:55.198 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:55.462 /dev/nbd0 00:18:55.462 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:55.462 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:55.462 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:55.462 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:55.462 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:55.462 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:55.462 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:55.462 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:55.463 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:55.463 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:55.463 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:55.463 1+0 records in 
00:18:55.463 1+0 records out 00:18:55.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399277 s, 10.3 MB/s 00:18:55.463 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:55.463 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:55.463 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:55.463 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:55.463 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:55.463 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:55.463 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:55.463 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:56.031 /dev/nbd1 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:56.031 1+0 records in 00:18:56.031 1+0 records out 00:18:56.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000675717 s, 6.1 MB/s 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:56.031 14:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:56.289 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:56.547 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:56.548 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:56.548 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:56.548 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:56.548 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:56.548 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:56.548 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:56.548 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:56.548 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:56.806 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:56.806 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:56.806 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:56.806 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:56.806 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:56.806 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:56.806 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:18:56.806 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:56.806 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:56.806 14:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85051 00:18:56.806 14:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85051 ']' 00:18:56.806 14:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85051 00:18:56.806 14:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:56.806 14:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.806 14:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85051 00:18:56.806 killing process with pid 85051 00:18:56.806 Received shutdown signal, test time was about 60.000000 seconds 00:18:56.806 00:18:56.806 Latency(us) 00:18:56.806 [2024-11-20T14:29:35.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.807 [2024-11-20T14:29:35.789Z] =================================================================================================================== 00:18:56.807 [2024-11-20T14:29:35.789Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:56.807 14:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:56.807 14:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:56.807 14:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85051' 00:18:56.807 14:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85051 00:18:56.807 [2024-11-20 14:29:35.648370] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.807 14:29:35 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@978 -- # wait 85051 00:18:57.375 [2024-11-20 14:29:36.104674] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:58.313 00:18:58.313 real 0m20.531s 00:18:58.313 user 0m25.768s 00:18:58.313 sys 0m2.319s 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.313 ************************************ 00:18:58.313 END TEST raid5f_rebuild_test 00:18:58.313 ************************************ 00:18:58.313 14:29:37 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:18:58.313 14:29:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:58.313 14:29:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.313 14:29:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.313 ************************************ 00:18:58.313 START TEST raid5f_rebuild_test_sb 00:18:58.313 ************************************ 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:58.313 
14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 
00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85561 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85561 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85561 ']' 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.313 14:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.572 [2024-11-20 14:29:37.365302] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:18:58.572 [2024-11-20 14:29:37.365949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85561 ] 00:18:58.572 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:58.572 Zero copy mechanism will not be used. 00:18:58.572 [2024-11-20 14:29:37.551677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.831 [2024-11-20 14:29:37.687819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.090 [2024-11-20 14:29:37.903867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.090 [2024-11-20 14:29:37.904185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.349 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.349 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.609 BaseBdev1_malloc 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.609 [2024-11-20 14:29:38.383243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:59.609 [2024-11-20 14:29:38.383332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.609 [2024-11-20 14:29:38.383400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:59.609 [2024-11-20 14:29:38.383422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.609 [2024-11-20 14:29:38.386320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.609 [2024-11-20 14:29:38.386372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:59.609 BaseBdev1 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.609 BaseBdev2_malloc 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:59.609 
14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.609 [2024-11-20 14:29:38.436809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:59.609 [2024-11-20 14:29:38.436932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.609 [2024-11-20 14:29:38.436965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:59.609 [2024-11-20 14:29:38.436982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.609 [2024-11-20 14:29:38.439852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.609 [2024-11-20 14:29:38.439907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:59.609 BaseBdev2 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.609 BaseBdev3_malloc 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.609 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:59.609 [2024-11-20 14:29:38.501148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:59.609 [2024-11-20 14:29:38.501237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.609 [2024-11-20 14:29:38.501268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:59.610 [2024-11-20 14:29:38.501286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.610 [2024-11-20 14:29:38.504076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.610 [2024-11-20 14:29:38.504131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:59.610 BaseBdev3 00:18:59.610 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.610 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:59.610 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:59.610 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.610 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.610 BaseBdev4_malloc 00:18:59.610 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.610 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:59.610 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.610 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.610 [2024-11-20 14:29:38.554798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:59.610 
[2024-11-20 14:29:38.554877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.610 [2024-11-20 14:29:38.554922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:59.610 [2024-11-20 14:29:38.554940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.610 [2024-11-20 14:29:38.557869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.610 [2024-11-20 14:29:38.557955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:59.610 BaseBdev4 00:18:59.610 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.610 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:59.610 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.610 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.870 spare_malloc 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.870 spare_delay 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.870 14:29:38 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.870 [2024-11-20 14:29:38.616436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:59.870 [2024-11-20 14:29:38.616695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.870 [2024-11-20 14:29:38.616738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:59.870 [2024-11-20 14:29:38.616757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.870 [2024-11-20 14:29:38.619704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.870 [2024-11-20 14:29:38.619891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:59.870 spare 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.870 [2024-11-20 14:29:38.624580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.870 [2024-11-20 14:29:38.627180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:59.870 [2024-11-20 14:29:38.627269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:59.870 [2024-11-20 14:29:38.627393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:59.870 [2024-11-20 14:29:38.627670] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:59.870 [2024-11-20 
14:29:38.627695] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:59.870 [2024-11-20 14:29:38.628105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:59.870 [2024-11-20 14:29:38.635193] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:59.870 [2024-11-20 14:29:38.635221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:59.870 [2024-11-20 14:29:38.635498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.870 14:29:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.870 "name": "raid_bdev1", 00:18:59.870 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:18:59.870 "strip_size_kb": 64, 00:18:59.870 "state": "online", 00:18:59.870 "raid_level": "raid5f", 00:18:59.870 "superblock": true, 00:18:59.870 "num_base_bdevs": 4, 00:18:59.870 "num_base_bdevs_discovered": 4, 00:18:59.870 "num_base_bdevs_operational": 4, 00:18:59.870 "base_bdevs_list": [ 00:18:59.870 { 00:18:59.870 "name": "BaseBdev1", 00:18:59.870 "uuid": "3f9f4af0-4b12-5b8a-9fe8-1c7a2eec029c", 00:18:59.870 "is_configured": true, 00:18:59.870 "data_offset": 2048, 00:18:59.870 "data_size": 63488 00:18:59.870 }, 00:18:59.870 { 00:18:59.870 "name": "BaseBdev2", 00:18:59.870 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:18:59.870 "is_configured": true, 00:18:59.870 "data_offset": 2048, 00:18:59.870 "data_size": 63488 00:18:59.870 }, 00:18:59.870 { 00:18:59.870 "name": "BaseBdev3", 00:18:59.870 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:18:59.870 "is_configured": true, 00:18:59.870 "data_offset": 2048, 00:18:59.870 "data_size": 63488 00:18:59.870 }, 00:18:59.870 { 00:18:59.870 "name": "BaseBdev4", 00:18:59.870 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:18:59.870 "is_configured": true, 00:18:59.870 "data_offset": 2048, 00:18:59.870 "data_size": 63488 00:18:59.870 } 00:18:59.870 ] 00:18:59.870 }' 00:18:59.870 14:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.870 14:29:38 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.440 [2024-11-20 14:29:39.143486] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:00.440 14:29:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:00.440 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:00.698 [2024-11-20 14:29:39.547504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:00.698 /dev/nbd0 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:00.699 1+0 records in 00:19:00.699 1+0 records out 00:19:00.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462789 s, 8.9 MB/s 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:00.699 14:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:19:01.636 496+0 records in 00:19:01.636 496+0 records out 00:19:01.636 97517568 bytes (98 MB, 93 MiB) copied, 0.637973 s, 153 MB/s 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:01.636 [2024-11-20 14:29:40.546460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:01.636 [2024-11-20 14:29:40.559104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.636 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.895 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.895 "name": "raid_bdev1", 00:19:01.895 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:01.895 "strip_size_kb": 64, 00:19:01.895 "state": "online", 00:19:01.895 "raid_level": "raid5f", 00:19:01.895 "superblock": true, 00:19:01.895 "num_base_bdevs": 4, 00:19:01.895 "num_base_bdevs_discovered": 3, 00:19:01.895 "num_base_bdevs_operational": 3, 00:19:01.895 "base_bdevs_list": [ 00:19:01.895 { 00:19:01.895 "name": null, 00:19:01.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.896 "is_configured": false, 00:19:01.896 "data_offset": 0, 00:19:01.896 "data_size": 63488 00:19:01.896 }, 00:19:01.896 { 00:19:01.896 "name": "BaseBdev2", 00:19:01.896 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:01.896 "is_configured": true, 00:19:01.896 "data_offset": 2048, 00:19:01.896 "data_size": 63488 00:19:01.896 }, 00:19:01.896 { 00:19:01.896 "name": "BaseBdev3", 00:19:01.896 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:01.896 "is_configured": true, 00:19:01.896 "data_offset": 2048, 00:19:01.896 "data_size": 63488 00:19:01.896 }, 00:19:01.896 { 00:19:01.896 "name": "BaseBdev4", 00:19:01.896 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:01.896 "is_configured": true, 00:19:01.896 "data_offset": 2048, 00:19:01.896 "data_size": 63488 00:19:01.896 } 00:19:01.896 ] 00:19:01.896 }' 00:19:01.896 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.896 14:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.154 14:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:02.154 14:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.154 14:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.154 [2024-11-20 14:29:41.067314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:19:02.154 [2024-11-20 14:29:41.082640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:19:02.154 14:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.154 14:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:02.154 [2024-11-20 14:29:41.092336] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.532 "name": "raid_bdev1", 00:19:03.532 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:03.532 "strip_size_kb": 64, 00:19:03.532 "state": "online", 00:19:03.532 "raid_level": "raid5f", 00:19:03.532 "superblock": true, 00:19:03.532 "num_base_bdevs": 4, 
00:19:03.532 "num_base_bdevs_discovered": 4, 00:19:03.532 "num_base_bdevs_operational": 4, 00:19:03.532 "process": { 00:19:03.532 "type": "rebuild", 00:19:03.532 "target": "spare", 00:19:03.532 "progress": { 00:19:03.532 "blocks": 17280, 00:19:03.532 "percent": 9 00:19:03.532 } 00:19:03.532 }, 00:19:03.532 "base_bdevs_list": [ 00:19:03.532 { 00:19:03.532 "name": "spare", 00:19:03.532 "uuid": "9c67fbbe-30c1-5d0f-98cd-27a89115be47", 00:19:03.532 "is_configured": true, 00:19:03.532 "data_offset": 2048, 00:19:03.532 "data_size": 63488 00:19:03.532 }, 00:19:03.532 { 00:19:03.532 "name": "BaseBdev2", 00:19:03.532 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:03.532 "is_configured": true, 00:19:03.532 "data_offset": 2048, 00:19:03.532 "data_size": 63488 00:19:03.532 }, 00:19:03.532 { 00:19:03.532 "name": "BaseBdev3", 00:19:03.532 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:03.532 "is_configured": true, 00:19:03.532 "data_offset": 2048, 00:19:03.532 "data_size": 63488 00:19:03.532 }, 00:19:03.532 { 00:19:03.532 "name": "BaseBdev4", 00:19:03.532 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:03.532 "is_configured": true, 00:19:03.532 "data_offset": 2048, 00:19:03.532 "data_size": 63488 00:19:03.532 } 00:19:03.532 ] 00:19:03.532 }' 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.532 14:29:42 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.532 [2024-11-20 14:29:42.262889] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.532 [2024-11-20 14:29:42.307101] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:03.532 [2024-11-20 14:29:42.307244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.532 [2024-11-20 14:29:42.307284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.532 [2024-11-20 14:29:42.307309] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.532 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:03.533 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:03.533 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:03.533 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.533 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.533 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.533 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.533 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.533 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.533 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.533 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.533 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.533 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.533 "name": "raid_bdev1", 00:19:03.533 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:03.533 "strip_size_kb": 64, 00:19:03.533 "state": "online", 00:19:03.533 "raid_level": "raid5f", 00:19:03.533 "superblock": true, 00:19:03.533 "num_base_bdevs": 4, 00:19:03.533 "num_base_bdevs_discovered": 3, 00:19:03.533 "num_base_bdevs_operational": 3, 00:19:03.533 "base_bdevs_list": [ 00:19:03.533 { 00:19:03.533 "name": null, 00:19:03.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.533 "is_configured": false, 00:19:03.533 "data_offset": 0, 00:19:03.533 "data_size": 63488 00:19:03.533 }, 00:19:03.533 { 00:19:03.533 "name": "BaseBdev2", 00:19:03.533 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:03.533 "is_configured": true, 00:19:03.533 "data_offset": 2048, 00:19:03.533 "data_size": 63488 00:19:03.533 }, 00:19:03.533 { 00:19:03.533 "name": "BaseBdev3", 00:19:03.533 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:03.533 "is_configured": true, 00:19:03.533 "data_offset": 2048, 00:19:03.533 "data_size": 63488 00:19:03.533 }, 00:19:03.533 { 00:19:03.533 "name": "BaseBdev4", 00:19:03.533 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:03.533 "is_configured": true, 00:19:03.533 "data_offset": 2048, 00:19:03.533 "data_size": 63488 00:19:03.533 } 00:19:03.533 ] 00:19:03.533 }' 00:19:03.533 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.533 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.100 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:04.100 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.100 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:04.100 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:04.100 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.100 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.100 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.100 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.100 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.100 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.100 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.100 "name": "raid_bdev1", 00:19:04.100 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:04.100 "strip_size_kb": 64, 00:19:04.100 "state": "online", 00:19:04.100 "raid_level": "raid5f", 00:19:04.100 "superblock": true, 00:19:04.100 "num_base_bdevs": 4, 00:19:04.100 "num_base_bdevs_discovered": 3, 00:19:04.100 "num_base_bdevs_operational": 3, 00:19:04.100 "base_bdevs_list": [ 00:19:04.100 { 00:19:04.100 "name": null, 00:19:04.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.100 "is_configured": false, 00:19:04.100 "data_offset": 0, 00:19:04.100 "data_size": 63488 00:19:04.100 }, 00:19:04.100 { 
00:19:04.100 "name": "BaseBdev2", 00:19:04.100 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:04.100 "is_configured": true, 00:19:04.100 "data_offset": 2048, 00:19:04.100 "data_size": 63488 00:19:04.100 }, 00:19:04.100 { 00:19:04.100 "name": "BaseBdev3", 00:19:04.100 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:04.100 "is_configured": true, 00:19:04.100 "data_offset": 2048, 00:19:04.100 "data_size": 63488 00:19:04.100 }, 00:19:04.100 { 00:19:04.100 "name": "BaseBdev4", 00:19:04.100 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:04.100 "is_configured": true, 00:19:04.100 "data_offset": 2048, 00:19:04.100 "data_size": 63488 00:19:04.100 } 00:19:04.100 ] 00:19:04.100 }' 00:19:04.100 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.100 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:04.100 14:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.100 14:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:04.100 14:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:04.100 14:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.100 14:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.100 [2024-11-20 14:29:43.023060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:04.100 [2024-11-20 14:29:43.036370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:19:04.100 14:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.100 14:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:04.100 [2024-11-20 14:29:43.045177] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:05.478 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.478 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.478 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:05.478 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:05.478 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.478 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.478 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.478 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.478 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.478 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.478 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.478 "name": "raid_bdev1", 00:19:05.478 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:05.478 "strip_size_kb": 64, 00:19:05.478 "state": "online", 00:19:05.478 "raid_level": "raid5f", 00:19:05.478 "superblock": true, 00:19:05.478 "num_base_bdevs": 4, 00:19:05.478 "num_base_bdevs_discovered": 4, 00:19:05.478 "num_base_bdevs_operational": 4, 00:19:05.478 "process": { 00:19:05.478 "type": "rebuild", 00:19:05.478 "target": "spare", 00:19:05.478 "progress": { 00:19:05.478 "blocks": 17280, 00:19:05.478 "percent": 9 00:19:05.478 } 00:19:05.478 }, 00:19:05.478 "base_bdevs_list": [ 00:19:05.478 { 00:19:05.478 "name": "spare", 00:19:05.479 "uuid": 
"9c67fbbe-30c1-5d0f-98cd-27a89115be47", 00:19:05.479 "is_configured": true, 00:19:05.479 "data_offset": 2048, 00:19:05.479 "data_size": 63488 00:19:05.479 }, 00:19:05.479 { 00:19:05.479 "name": "BaseBdev2", 00:19:05.479 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:05.479 "is_configured": true, 00:19:05.479 "data_offset": 2048, 00:19:05.479 "data_size": 63488 00:19:05.479 }, 00:19:05.479 { 00:19:05.479 "name": "BaseBdev3", 00:19:05.479 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:05.479 "is_configured": true, 00:19:05.479 "data_offset": 2048, 00:19:05.479 "data_size": 63488 00:19:05.479 }, 00:19:05.479 { 00:19:05.479 "name": "BaseBdev4", 00:19:05.479 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:05.479 "is_configured": true, 00:19:05.479 "data_offset": 2048, 00:19:05.479 "data_size": 63488 00:19:05.479 } 00:19:05.479 ] 00:19:05.479 }' 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:05.479 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=691 00:19:05.479 
14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.479 "name": "raid_bdev1", 00:19:05.479 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:05.479 "strip_size_kb": 64, 00:19:05.479 "state": "online", 00:19:05.479 "raid_level": "raid5f", 00:19:05.479 "superblock": true, 00:19:05.479 "num_base_bdevs": 4, 00:19:05.479 "num_base_bdevs_discovered": 4, 00:19:05.479 "num_base_bdevs_operational": 4, 00:19:05.479 "process": { 00:19:05.479 "type": "rebuild", 00:19:05.479 "target": "spare", 00:19:05.479 "progress": { 00:19:05.479 "blocks": 21120, 00:19:05.479 "percent": 11 00:19:05.479 } 00:19:05.479 }, 00:19:05.479 "base_bdevs_list": [ 00:19:05.479 { 00:19:05.479 "name": "spare", 00:19:05.479 "uuid": 
"9c67fbbe-30c1-5d0f-98cd-27a89115be47", 00:19:05.479 "is_configured": true, 00:19:05.479 "data_offset": 2048, 00:19:05.479 "data_size": 63488 00:19:05.479 }, 00:19:05.479 { 00:19:05.479 "name": "BaseBdev2", 00:19:05.479 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:05.479 "is_configured": true, 00:19:05.479 "data_offset": 2048, 00:19:05.479 "data_size": 63488 00:19:05.479 }, 00:19:05.479 { 00:19:05.479 "name": "BaseBdev3", 00:19:05.479 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:05.479 "is_configured": true, 00:19:05.479 "data_offset": 2048, 00:19:05.479 "data_size": 63488 00:19:05.479 }, 00:19:05.479 { 00:19:05.479 "name": "BaseBdev4", 00:19:05.479 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:05.479 "is_configured": true, 00:19:05.479 "data_offset": 2048, 00:19:05.479 "data_size": 63488 00:19:05.479 } 00:19:05.479 ] 00:19:05.479 }' 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.479 14:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:06.899 14:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:06.899 14:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.899 14:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.899 14:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:06.899 14:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:19:06.899 14:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.899 14:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.899 14:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.899 14:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.899 14:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.899 14:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.899 14:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.899 "name": "raid_bdev1", 00:19:06.899 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:06.899 "strip_size_kb": 64, 00:19:06.899 "state": "online", 00:19:06.899 "raid_level": "raid5f", 00:19:06.899 "superblock": true, 00:19:06.899 "num_base_bdevs": 4, 00:19:06.899 "num_base_bdevs_discovered": 4, 00:19:06.899 "num_base_bdevs_operational": 4, 00:19:06.899 "process": { 00:19:06.899 "type": "rebuild", 00:19:06.899 "target": "spare", 00:19:06.899 "progress": { 00:19:06.899 "blocks": 44160, 00:19:06.899 "percent": 23 00:19:06.899 } 00:19:06.899 }, 00:19:06.899 "base_bdevs_list": [ 00:19:06.899 { 00:19:06.899 "name": "spare", 00:19:06.899 "uuid": "9c67fbbe-30c1-5d0f-98cd-27a89115be47", 00:19:06.899 "is_configured": true, 00:19:06.899 "data_offset": 2048, 00:19:06.899 "data_size": 63488 00:19:06.899 }, 00:19:06.899 { 00:19:06.899 "name": "BaseBdev2", 00:19:06.899 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:06.899 "is_configured": true, 00:19:06.899 "data_offset": 2048, 00:19:06.899 "data_size": 63488 00:19:06.899 }, 00:19:06.899 { 00:19:06.899 "name": "BaseBdev3", 00:19:06.899 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:06.899 "is_configured": true, 00:19:06.899 
"data_offset": 2048, 00:19:06.899 "data_size": 63488 00:19:06.899 }, 00:19:06.899 { 00:19:06.899 "name": "BaseBdev4", 00:19:06.899 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:06.899 "is_configured": true, 00:19:06.899 "data_offset": 2048, 00:19:06.899 "data_size": 63488 00:19:06.899 } 00:19:06.899 ] 00:19:06.899 }' 00:19:06.899 14:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.899 14:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:06.899 14:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.899 14:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:06.899 14:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:07.834 14:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:07.834 14:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.834 14:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.834 14:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.834 14:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.834 14:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.834 14:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.834 14:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.834 14:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.834 14:29:46 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:07.834 14:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.834 14:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.834 "name": "raid_bdev1", 00:19:07.834 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:07.834 "strip_size_kb": 64, 00:19:07.834 "state": "online", 00:19:07.834 "raid_level": "raid5f", 00:19:07.834 "superblock": true, 00:19:07.834 "num_base_bdevs": 4, 00:19:07.834 "num_base_bdevs_discovered": 4, 00:19:07.834 "num_base_bdevs_operational": 4, 00:19:07.834 "process": { 00:19:07.834 "type": "rebuild", 00:19:07.834 "target": "spare", 00:19:07.834 "progress": { 00:19:07.834 "blocks": 65280, 00:19:07.834 "percent": 34 00:19:07.834 } 00:19:07.834 }, 00:19:07.834 "base_bdevs_list": [ 00:19:07.834 { 00:19:07.834 "name": "spare", 00:19:07.834 "uuid": "9c67fbbe-30c1-5d0f-98cd-27a89115be47", 00:19:07.834 "is_configured": true, 00:19:07.834 "data_offset": 2048, 00:19:07.834 "data_size": 63488 00:19:07.834 }, 00:19:07.834 { 00:19:07.834 "name": "BaseBdev2", 00:19:07.834 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:07.834 "is_configured": true, 00:19:07.834 "data_offset": 2048, 00:19:07.834 "data_size": 63488 00:19:07.834 }, 00:19:07.834 { 00:19:07.834 "name": "BaseBdev3", 00:19:07.834 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:07.834 "is_configured": true, 00:19:07.834 "data_offset": 2048, 00:19:07.834 "data_size": 63488 00:19:07.834 }, 00:19:07.834 { 00:19:07.834 "name": "BaseBdev4", 00:19:07.834 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:07.834 "is_configured": true, 00:19:07.834 "data_offset": 2048, 00:19:07.834 "data_size": 63488 00:19:07.834 } 00:19:07.834 ] 00:19:07.834 }' 00:19:07.834 14:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.834 14:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:19:07.834 14:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.834 14:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:07.834 14:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:08.769 14:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:08.769 14:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:08.769 14:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.769 14:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:08.769 14:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:08.769 14:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.769 14:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.769 14:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.769 14:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.769 14:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.769 14:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.029 14:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.029 "name": "raid_bdev1", 00:19:09.029 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:09.029 "strip_size_kb": 64, 00:19:09.029 "state": "online", 00:19:09.029 "raid_level": "raid5f", 00:19:09.029 "superblock": true, 00:19:09.029 "num_base_bdevs": 4, 00:19:09.029 "num_base_bdevs_discovered": 4, 
00:19:09.029 "num_base_bdevs_operational": 4, 00:19:09.029 "process": { 00:19:09.029 "type": "rebuild", 00:19:09.029 "target": "spare", 00:19:09.029 "progress": { 00:19:09.029 "blocks": 88320, 00:19:09.029 "percent": 46 00:19:09.029 } 00:19:09.029 }, 00:19:09.029 "base_bdevs_list": [ 00:19:09.029 { 00:19:09.029 "name": "spare", 00:19:09.029 "uuid": "9c67fbbe-30c1-5d0f-98cd-27a89115be47", 00:19:09.029 "is_configured": true, 00:19:09.029 "data_offset": 2048, 00:19:09.029 "data_size": 63488 00:19:09.029 }, 00:19:09.029 { 00:19:09.029 "name": "BaseBdev2", 00:19:09.029 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:09.029 "is_configured": true, 00:19:09.029 "data_offset": 2048, 00:19:09.029 "data_size": 63488 00:19:09.029 }, 00:19:09.029 { 00:19:09.029 "name": "BaseBdev3", 00:19:09.029 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:09.029 "is_configured": true, 00:19:09.029 "data_offset": 2048, 00:19:09.029 "data_size": 63488 00:19:09.029 }, 00:19:09.029 { 00:19:09.029 "name": "BaseBdev4", 00:19:09.029 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:09.029 "is_configured": true, 00:19:09.029 "data_offset": 2048, 00:19:09.029 "data_size": 63488 00:19:09.029 } 00:19:09.029 ] 00:19:09.029 }' 00:19:09.029 14:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.029 14:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.029 14:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.029 14:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.029 14:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:09.967 14:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:09.967 14:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:19:09.967 14:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.967 14:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.967 14:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.967 14:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.967 14:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.967 14:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.967 14:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.967 14:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.967 14:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.967 14:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.967 "name": "raid_bdev1", 00:19:09.967 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:09.967 "strip_size_kb": 64, 00:19:09.967 "state": "online", 00:19:09.967 "raid_level": "raid5f", 00:19:09.967 "superblock": true, 00:19:09.967 "num_base_bdevs": 4, 00:19:09.967 "num_base_bdevs_discovered": 4, 00:19:09.967 "num_base_bdevs_operational": 4, 00:19:09.967 "process": { 00:19:09.967 "type": "rebuild", 00:19:09.967 "target": "spare", 00:19:09.967 "progress": { 00:19:09.967 "blocks": 109440, 00:19:09.967 "percent": 57 00:19:09.967 } 00:19:09.967 }, 00:19:09.967 "base_bdevs_list": [ 00:19:09.967 { 00:19:09.967 "name": "spare", 00:19:09.967 "uuid": "9c67fbbe-30c1-5d0f-98cd-27a89115be47", 00:19:09.967 "is_configured": true, 00:19:09.967 "data_offset": 2048, 00:19:09.967 "data_size": 63488 00:19:09.967 }, 00:19:09.967 { 00:19:09.967 "name": "BaseBdev2", 
00:19:09.967 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:09.967 "is_configured": true, 00:19:09.967 "data_offset": 2048, 00:19:09.967 "data_size": 63488 00:19:09.967 }, 00:19:09.967 { 00:19:09.967 "name": "BaseBdev3", 00:19:09.967 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:09.967 "is_configured": true, 00:19:09.967 "data_offset": 2048, 00:19:09.967 "data_size": 63488 00:19:09.967 }, 00:19:09.967 { 00:19:09.967 "name": "BaseBdev4", 00:19:09.967 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:09.967 "is_configured": true, 00:19:09.967 "data_offset": 2048, 00:19:09.967 "data_size": 63488 00:19:09.967 } 00:19:09.967 ] 00:19:09.967 }' 00:19:09.968 14:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.226 14:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:10.226 14:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.226 14:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.226 14:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:11.163 14:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:11.163 14:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.163 14:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.163 14:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.163 14:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.163 14:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.163 14:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:11.163 14:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.163 14:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.163 14:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.163 14:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.163 14:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.163 "name": "raid_bdev1", 00:19:11.163 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:11.163 "strip_size_kb": 64, 00:19:11.163 "state": "online", 00:19:11.163 "raid_level": "raid5f", 00:19:11.163 "superblock": true, 00:19:11.163 "num_base_bdevs": 4, 00:19:11.163 "num_base_bdevs_discovered": 4, 00:19:11.163 "num_base_bdevs_operational": 4, 00:19:11.163 "process": { 00:19:11.163 "type": "rebuild", 00:19:11.163 "target": "spare", 00:19:11.163 "progress": { 00:19:11.163 "blocks": 132480, 00:19:11.163 "percent": 69 00:19:11.163 } 00:19:11.163 }, 00:19:11.163 "base_bdevs_list": [ 00:19:11.163 { 00:19:11.163 "name": "spare", 00:19:11.163 "uuid": "9c67fbbe-30c1-5d0f-98cd-27a89115be47", 00:19:11.163 "is_configured": true, 00:19:11.163 "data_offset": 2048, 00:19:11.163 "data_size": 63488 00:19:11.163 }, 00:19:11.163 { 00:19:11.163 "name": "BaseBdev2", 00:19:11.163 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:11.163 "is_configured": true, 00:19:11.163 "data_offset": 2048, 00:19:11.163 "data_size": 63488 00:19:11.163 }, 00:19:11.163 { 00:19:11.163 "name": "BaseBdev3", 00:19:11.163 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:11.163 "is_configured": true, 00:19:11.163 "data_offset": 2048, 00:19:11.163 "data_size": 63488 00:19:11.163 }, 00:19:11.163 { 00:19:11.163 "name": "BaseBdev4", 00:19:11.163 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:11.163 "is_configured": true, 
00:19:11.163 "data_offset": 2048, 00:19:11.163 "data_size": 63488 00:19:11.163 } 00:19:11.163 ] 00:19:11.163 }' 00:19:11.163 14:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.421 14:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.421 14:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.421 14:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.421 14:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:12.358 14:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:12.358 14:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.358 14:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.358 14:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.358 14:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.358 14:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.358 14:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.358 14:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.358 14:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.358 14:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.358 14:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.358 14:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:12.358 "name": "raid_bdev1", 00:19:12.358 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:12.358 "strip_size_kb": 64, 00:19:12.358 "state": "online", 00:19:12.358 "raid_level": "raid5f", 00:19:12.358 "superblock": true, 00:19:12.358 "num_base_bdevs": 4, 00:19:12.358 "num_base_bdevs_discovered": 4, 00:19:12.358 "num_base_bdevs_operational": 4, 00:19:12.358 "process": { 00:19:12.358 "type": "rebuild", 00:19:12.358 "target": "spare", 00:19:12.358 "progress": { 00:19:12.358 "blocks": 155520, 00:19:12.358 "percent": 81 00:19:12.358 } 00:19:12.358 }, 00:19:12.358 "base_bdevs_list": [ 00:19:12.358 { 00:19:12.358 "name": "spare", 00:19:12.358 "uuid": "9c67fbbe-30c1-5d0f-98cd-27a89115be47", 00:19:12.358 "is_configured": true, 00:19:12.358 "data_offset": 2048, 00:19:12.358 "data_size": 63488 00:19:12.358 }, 00:19:12.358 { 00:19:12.358 "name": "BaseBdev2", 00:19:12.358 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:12.358 "is_configured": true, 00:19:12.358 "data_offset": 2048, 00:19:12.358 "data_size": 63488 00:19:12.358 }, 00:19:12.358 { 00:19:12.358 "name": "BaseBdev3", 00:19:12.358 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:12.358 "is_configured": true, 00:19:12.358 "data_offset": 2048, 00:19:12.358 "data_size": 63488 00:19:12.358 }, 00:19:12.358 { 00:19:12.358 "name": "BaseBdev4", 00:19:12.358 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:12.358 "is_configured": true, 00:19:12.358 "data_offset": 2048, 00:19:12.358 "data_size": 63488 00:19:12.358 } 00:19:12.358 ] 00:19:12.358 }' 00:19:12.358 14:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.358 14:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:12.630 14:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.630 14:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:19:12.630 14:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:13.578 14:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:13.578 14:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.578 14:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.578 14:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.578 14:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.578 14:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.578 14:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.578 14:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.578 14:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.578 14:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.578 14:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.578 14:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.578 "name": "raid_bdev1", 00:19:13.578 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:13.578 "strip_size_kb": 64, 00:19:13.578 "state": "online", 00:19:13.578 "raid_level": "raid5f", 00:19:13.578 "superblock": true, 00:19:13.578 "num_base_bdevs": 4, 00:19:13.578 "num_base_bdevs_discovered": 4, 00:19:13.578 "num_base_bdevs_operational": 4, 00:19:13.578 "process": { 00:19:13.578 "type": "rebuild", 00:19:13.578 "target": "spare", 00:19:13.578 "progress": { 00:19:13.578 "blocks": 176640, 00:19:13.578 "percent": 92 00:19:13.578 
} 00:19:13.578 }, 00:19:13.578 "base_bdevs_list": [ 00:19:13.578 { 00:19:13.578 "name": "spare", 00:19:13.578 "uuid": "9c67fbbe-30c1-5d0f-98cd-27a89115be47", 00:19:13.578 "is_configured": true, 00:19:13.578 "data_offset": 2048, 00:19:13.578 "data_size": 63488 00:19:13.578 }, 00:19:13.578 { 00:19:13.578 "name": "BaseBdev2", 00:19:13.578 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:13.578 "is_configured": true, 00:19:13.578 "data_offset": 2048, 00:19:13.578 "data_size": 63488 00:19:13.578 }, 00:19:13.578 { 00:19:13.578 "name": "BaseBdev3", 00:19:13.578 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:13.578 "is_configured": true, 00:19:13.578 "data_offset": 2048, 00:19:13.578 "data_size": 63488 00:19:13.578 }, 00:19:13.578 { 00:19:13.578 "name": "BaseBdev4", 00:19:13.578 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:13.578 "is_configured": true, 00:19:13.578 "data_offset": 2048, 00:19:13.578 "data_size": 63488 00:19:13.578 } 00:19:13.578 ] 00:19:13.578 }' 00:19:13.578 14:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.578 14:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.578 14:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.578 14:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.578 14:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:14.519 [2024-11-20 14:29:53.154332] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:14.519 [2024-11-20 14:29:53.154680] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:14.519 [2024-11-20 14:29:53.154898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.778 "name": "raid_bdev1", 00:19:14.778 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:14.778 "strip_size_kb": 64, 00:19:14.778 "state": "online", 00:19:14.778 "raid_level": "raid5f", 00:19:14.778 "superblock": true, 00:19:14.778 "num_base_bdevs": 4, 00:19:14.778 "num_base_bdevs_discovered": 4, 00:19:14.778 "num_base_bdevs_operational": 4, 00:19:14.778 "base_bdevs_list": [ 00:19:14.778 { 00:19:14.778 "name": "spare", 00:19:14.778 "uuid": "9c67fbbe-30c1-5d0f-98cd-27a89115be47", 00:19:14.778 "is_configured": true, 00:19:14.778 "data_offset": 2048, 00:19:14.778 "data_size": 63488 00:19:14.778 }, 00:19:14.778 { 00:19:14.778 "name": "BaseBdev2", 00:19:14.778 "uuid": 
"ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:14.778 "is_configured": true, 00:19:14.778 "data_offset": 2048, 00:19:14.778 "data_size": 63488 00:19:14.778 }, 00:19:14.778 { 00:19:14.778 "name": "BaseBdev3", 00:19:14.778 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:14.778 "is_configured": true, 00:19:14.778 "data_offset": 2048, 00:19:14.778 "data_size": 63488 00:19:14.778 }, 00:19:14.778 { 00:19:14.778 "name": "BaseBdev4", 00:19:14.778 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:14.778 "is_configured": true, 00:19:14.778 "data_offset": 2048, 00:19:14.778 "data_size": 63488 00:19:14.778 } 00:19:14.778 ] 00:19:14.778 }' 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.778 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.037 "name": "raid_bdev1", 00:19:15.037 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:15.037 "strip_size_kb": 64, 00:19:15.037 "state": "online", 00:19:15.037 "raid_level": "raid5f", 00:19:15.037 "superblock": true, 00:19:15.037 "num_base_bdevs": 4, 00:19:15.037 "num_base_bdevs_discovered": 4, 00:19:15.037 "num_base_bdevs_operational": 4, 00:19:15.037 "base_bdevs_list": [ 00:19:15.037 { 00:19:15.037 "name": "spare", 00:19:15.037 "uuid": "9c67fbbe-30c1-5d0f-98cd-27a89115be47", 00:19:15.037 "is_configured": true, 00:19:15.037 "data_offset": 2048, 00:19:15.037 "data_size": 63488 00:19:15.037 }, 00:19:15.037 { 00:19:15.037 "name": "BaseBdev2", 00:19:15.037 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:15.037 "is_configured": true, 00:19:15.037 "data_offset": 2048, 00:19:15.037 "data_size": 63488 00:19:15.037 }, 00:19:15.037 { 00:19:15.037 "name": "BaseBdev3", 00:19:15.037 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:15.037 "is_configured": true, 00:19:15.037 "data_offset": 2048, 00:19:15.037 "data_size": 63488 00:19:15.037 }, 00:19:15.037 { 00:19:15.037 "name": "BaseBdev4", 00:19:15.037 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:15.037 "is_configured": true, 00:19:15.037 "data_offset": 2048, 00:19:15.037 "data_size": 63488 00:19:15.037 } 00:19:15.037 ] 00:19:15.037 }' 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:15.037 
14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.037 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:19:15.037 "name": "raid_bdev1", 00:19:15.037 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:15.037 "strip_size_kb": 64, 00:19:15.037 "state": "online", 00:19:15.037 "raid_level": "raid5f", 00:19:15.037 "superblock": true, 00:19:15.037 "num_base_bdevs": 4, 00:19:15.037 "num_base_bdevs_discovered": 4, 00:19:15.037 "num_base_bdevs_operational": 4, 00:19:15.037 "base_bdevs_list": [ 00:19:15.037 { 00:19:15.037 "name": "spare", 00:19:15.037 "uuid": "9c67fbbe-30c1-5d0f-98cd-27a89115be47", 00:19:15.037 "is_configured": true, 00:19:15.037 "data_offset": 2048, 00:19:15.037 "data_size": 63488 00:19:15.037 }, 00:19:15.037 { 00:19:15.037 "name": "BaseBdev2", 00:19:15.037 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:15.037 "is_configured": true, 00:19:15.037 "data_offset": 2048, 00:19:15.037 "data_size": 63488 00:19:15.037 }, 00:19:15.037 { 00:19:15.037 "name": "BaseBdev3", 00:19:15.037 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:15.037 "is_configured": true, 00:19:15.037 "data_offset": 2048, 00:19:15.037 "data_size": 63488 00:19:15.037 }, 00:19:15.037 { 00:19:15.038 "name": "BaseBdev4", 00:19:15.038 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:15.038 "is_configured": true, 00:19:15.038 "data_offset": 2048, 00:19:15.038 "data_size": 63488 00:19:15.038 } 00:19:15.038 ] 00:19:15.038 }' 00:19:15.038 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.038 14:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.612 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:15.612 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.612 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.612 [2024-11-20 14:29:54.430511] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:19:15.612 [2024-11-20 14:29:54.430551] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:15.612 [2024-11-20 14:29:54.430652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:15.612 [2024-11-20 14:29:54.430782] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:15.612 [2024-11-20 14:29:54.430813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:15.612 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.612 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:15.612 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.612 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.612 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.612 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.612 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:15.612 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:15.612 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:15.612 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:15.613 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:15.613 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:15.613 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:19:15.613 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:15.613 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:15.613 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:15.613 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:15.613 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:15.613 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:15.878 /dev/nbd0 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:15.878 1+0 records in 
00:19:15.878 1+0 records out 00:19:15.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324471 s, 12.6 MB/s 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:15.878 14:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:16.465 /dev/nbd1 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:16.465 14:29:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:16.465 1+0 records in 00:19:16.465 1+0 records out 00:19:16.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390798 s, 10.5 MB/s 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:16.465 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:16.736 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:16.736 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:16.736 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:16.736 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:16.736 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:16.736 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:16.736 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:16.736 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:16.736 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:16.736 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.010 [2024-11-20 14:29:55.982551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:17.010 [2024-11-20 14:29:55.982647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.010 [2024-11-20 14:29:55.982710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:17.010 [2024-11-20 14:29:55.982738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.010 [2024-11-20 14:29:55.986687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.010 [2024-11-20 14:29:55.986751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:17.010 [2024-11-20 14:29:55.986945] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:17.010 [2024-11-20 14:29:55.987093] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:17.010 [2024-11-20 14:29:55.987450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:17.010 spare 00:19:17.010 [2024-11-20 14:29:55.987896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:17.010 [2024-11-20 14:29:55.988087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.010 14:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.283 [2024-11-20 14:29:56.088287] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:17.283 [2024-11-20 14:29:56.088388] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:17.283 [2024-11-20 14:29:56.088835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:19:17.283 [2024-11-20 14:29:56.095422] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:17.283 [2024-11-20 14:29:56.095454] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:17.283 [2024-11-20 14:29:56.095748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.283 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.283 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:17.283 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.283 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.283 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:17.283 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.283 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:17.283 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.283 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.283 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.284 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.284 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.284 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.284 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.284 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.284 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.284 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.284 "name": "raid_bdev1", 00:19:17.284 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:17.284 "strip_size_kb": 64, 00:19:17.284 "state": "online", 00:19:17.284 "raid_level": "raid5f", 00:19:17.284 "superblock": true, 00:19:17.284 "num_base_bdevs": 4, 00:19:17.284 "num_base_bdevs_discovered": 4, 00:19:17.284 "num_base_bdevs_operational": 4, 00:19:17.284 "base_bdevs_list": [ 00:19:17.284 { 
00:19:17.284 "name": "spare", 00:19:17.284 "uuid": "9c67fbbe-30c1-5d0f-98cd-27a89115be47", 00:19:17.284 "is_configured": true, 00:19:17.284 "data_offset": 2048, 00:19:17.284 "data_size": 63488 00:19:17.284 }, 00:19:17.284 { 00:19:17.284 "name": "BaseBdev2", 00:19:17.284 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:17.284 "is_configured": true, 00:19:17.284 "data_offset": 2048, 00:19:17.284 "data_size": 63488 00:19:17.284 }, 00:19:17.284 { 00:19:17.284 "name": "BaseBdev3", 00:19:17.284 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:17.284 "is_configured": true, 00:19:17.284 "data_offset": 2048, 00:19:17.284 "data_size": 63488 00:19:17.284 }, 00:19:17.284 { 00:19:17.284 "name": "BaseBdev4", 00:19:17.284 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:17.284 "is_configured": true, 00:19:17.284 "data_offset": 2048, 00:19:17.284 "data_size": 63488 00:19:17.284 } 00:19:17.284 ] 00:19:17.284 }' 00:19:17.284 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.284 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.852 "name": "raid_bdev1", 00:19:17.852 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:17.852 "strip_size_kb": 64, 00:19:17.852 "state": "online", 00:19:17.852 "raid_level": "raid5f", 00:19:17.852 "superblock": true, 00:19:17.852 "num_base_bdevs": 4, 00:19:17.852 "num_base_bdevs_discovered": 4, 00:19:17.852 "num_base_bdevs_operational": 4, 00:19:17.852 "base_bdevs_list": [ 00:19:17.852 { 00:19:17.852 "name": "spare", 00:19:17.852 "uuid": "9c67fbbe-30c1-5d0f-98cd-27a89115be47", 00:19:17.852 "is_configured": true, 00:19:17.852 "data_offset": 2048, 00:19:17.852 "data_size": 63488 00:19:17.852 }, 00:19:17.852 { 00:19:17.852 "name": "BaseBdev2", 00:19:17.852 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:17.852 "is_configured": true, 00:19:17.852 "data_offset": 2048, 00:19:17.852 "data_size": 63488 00:19:17.852 }, 00:19:17.852 { 00:19:17.852 "name": "BaseBdev3", 00:19:17.852 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:17.852 "is_configured": true, 00:19:17.852 "data_offset": 2048, 00:19:17.852 "data_size": 63488 00:19:17.852 }, 00:19:17.852 { 00:19:17.852 "name": "BaseBdev4", 00:19:17.852 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:17.852 "is_configured": true, 00:19:17.852 "data_offset": 2048, 00:19:17.852 "data_size": 63488 00:19:17.852 } 00:19:17.852 ] 00:19:17.852 }' 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.852 [2024-11-20 14:29:56.823457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.852 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.111 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.111 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.111 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.111 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.111 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.111 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.111 "name": "raid_bdev1", 00:19:18.111 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:18.111 "strip_size_kb": 64, 00:19:18.111 "state": "online", 00:19:18.111 "raid_level": "raid5f", 00:19:18.111 "superblock": true, 00:19:18.111 "num_base_bdevs": 4, 00:19:18.111 "num_base_bdevs_discovered": 3, 00:19:18.111 "num_base_bdevs_operational": 3, 00:19:18.111 "base_bdevs_list": [ 00:19:18.111 { 00:19:18.111 "name": null, 00:19:18.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.111 "is_configured": false, 00:19:18.111 "data_offset": 0, 00:19:18.111 "data_size": 63488 00:19:18.111 }, 00:19:18.111 { 00:19:18.111 "name": "BaseBdev2", 00:19:18.111 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:18.111 "is_configured": true, 00:19:18.111 "data_offset": 2048, 00:19:18.111 "data_size": 63488 00:19:18.111 }, 00:19:18.111 
{ 00:19:18.111 "name": "BaseBdev3", 00:19:18.111 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:18.111 "is_configured": true, 00:19:18.111 "data_offset": 2048, 00:19:18.111 "data_size": 63488 00:19:18.111 }, 00:19:18.111 { 00:19:18.111 "name": "BaseBdev4", 00:19:18.111 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:18.111 "is_configured": true, 00:19:18.111 "data_offset": 2048, 00:19:18.111 "data_size": 63488 00:19:18.111 } 00:19:18.111 ] 00:19:18.111 }' 00:19:18.111 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.111 14:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.370 14:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:18.370 14:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.370 14:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.370 [2024-11-20 14:29:57.319615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.370 [2024-11-20 14:29:57.319906] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:18.370 [2024-11-20 14:29:57.319940] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:18.370 [2024-11-20 14:29:57.320015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.370 [2024-11-20 14:29:57.333403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:19:18.370 14:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.370 14:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:18.370 [2024-11-20 14:29:57.342302] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:19.745 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.745 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.745 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.745 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.745 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.745 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.745 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.745 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.745 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.745 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.745 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.745 "name": "raid_bdev1", 00:19:19.745 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:19.745 "strip_size_kb": 64, 00:19:19.745 "state": "online", 00:19:19.745 
"raid_level": "raid5f", 00:19:19.745 "superblock": true, 00:19:19.745 "num_base_bdevs": 4, 00:19:19.745 "num_base_bdevs_discovered": 4, 00:19:19.745 "num_base_bdevs_operational": 4, 00:19:19.745 "process": { 00:19:19.745 "type": "rebuild", 00:19:19.745 "target": "spare", 00:19:19.745 "progress": { 00:19:19.745 "blocks": 17280, 00:19:19.745 "percent": 9 00:19:19.745 } 00:19:19.745 }, 00:19:19.745 "base_bdevs_list": [ 00:19:19.745 { 00:19:19.745 "name": "spare", 00:19:19.745 "uuid": "9c67fbbe-30c1-5d0f-98cd-27a89115be47", 00:19:19.745 "is_configured": true, 00:19:19.745 "data_offset": 2048, 00:19:19.745 "data_size": 63488 00:19:19.745 }, 00:19:19.745 { 00:19:19.745 "name": "BaseBdev2", 00:19:19.745 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:19.745 "is_configured": true, 00:19:19.745 "data_offset": 2048, 00:19:19.745 "data_size": 63488 00:19:19.745 }, 00:19:19.745 { 00:19:19.745 "name": "BaseBdev3", 00:19:19.745 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:19.745 "is_configured": true, 00:19:19.745 "data_offset": 2048, 00:19:19.745 "data_size": 63488 00:19:19.745 }, 00:19:19.745 { 00:19:19.745 "name": "BaseBdev4", 00:19:19.745 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:19.745 "is_configured": true, 00:19:19.745 "data_offset": 2048, 00:19:19.745 "data_size": 63488 00:19:19.745 } 00:19:19.745 ] 00:19:19.745 }' 00:19:19.745 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.745 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:19.745 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.745 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.745 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.746 [2024-11-20 14:29:58.519620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:19.746 [2024-11-20 14:29:58.555569] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:19.746 [2024-11-20 14:29:58.555725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.746 [2024-11-20 14:29:58.555754] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:19.746 [2024-11-20 14:29:58.555774] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.746 "name": "raid_bdev1", 00:19:19.746 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:19.746 "strip_size_kb": 64, 00:19:19.746 "state": "online", 00:19:19.746 "raid_level": "raid5f", 00:19:19.746 "superblock": true, 00:19:19.746 "num_base_bdevs": 4, 00:19:19.746 "num_base_bdevs_discovered": 3, 00:19:19.746 "num_base_bdevs_operational": 3, 00:19:19.746 "base_bdevs_list": [ 00:19:19.746 { 00:19:19.746 "name": null, 00:19:19.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.746 "is_configured": false, 00:19:19.746 "data_offset": 0, 00:19:19.746 "data_size": 63488 00:19:19.746 }, 00:19:19.746 { 00:19:19.746 "name": "BaseBdev2", 00:19:19.746 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:19.746 "is_configured": true, 00:19:19.746 "data_offset": 2048, 00:19:19.746 "data_size": 63488 00:19:19.746 }, 00:19:19.746 { 00:19:19.746 "name": "BaseBdev3", 00:19:19.746 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:19.746 "is_configured": true, 00:19:19.746 "data_offset": 2048, 00:19:19.746 "data_size": 63488 00:19:19.746 }, 00:19:19.746 { 00:19:19.746 "name": "BaseBdev4", 00:19:19.746 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:19.746 "is_configured": true, 00:19:19.746 "data_offset": 2048, 00:19:19.746 "data_size": 63488 00:19:19.746 } 00:19:19.746 ] 00:19:19.746 }' 
00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.746 14:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.314 14:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:20.314 14:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.314 14:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.314 [2024-11-20 14:29:59.059042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:20.314 [2024-11-20 14:29:59.059126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.314 [2024-11-20 14:29:59.059163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:20.314 [2024-11-20 14:29:59.059190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.314 [2024-11-20 14:29:59.059803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.314 [2024-11-20 14:29:59.059851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:20.314 [2024-11-20 14:29:59.059973] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:20.314 [2024-11-20 14:29:59.060013] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:20.314 [2024-11-20 14:29:59.060029] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:20.314 [2024-11-20 14:29:59.060071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:20.314 [2024-11-20 14:29:59.073263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:19:20.314 spare 00:19:20.314 14:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.314 14:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:20.314 [2024-11-20 14:29:59.082304] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:21.396 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:21.396 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.396 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:21.396 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:21.396 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.396 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.396 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.396 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.396 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.396 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.396 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.396 "name": "raid_bdev1", 00:19:21.396 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:21.396 "strip_size_kb": 64, 00:19:21.396 "state": 
"online", 00:19:21.396 "raid_level": "raid5f", 00:19:21.396 "superblock": true, 00:19:21.396 "num_base_bdevs": 4, 00:19:21.396 "num_base_bdevs_discovered": 4, 00:19:21.396 "num_base_bdevs_operational": 4, 00:19:21.396 "process": { 00:19:21.396 "type": "rebuild", 00:19:21.396 "target": "spare", 00:19:21.396 "progress": { 00:19:21.396 "blocks": 17280, 00:19:21.396 "percent": 9 00:19:21.396 } 00:19:21.396 }, 00:19:21.396 "base_bdevs_list": [ 00:19:21.396 { 00:19:21.396 "name": "spare", 00:19:21.396 "uuid": "9c67fbbe-30c1-5d0f-98cd-27a89115be47", 00:19:21.396 "is_configured": true, 00:19:21.396 "data_offset": 2048, 00:19:21.396 "data_size": 63488 00:19:21.396 }, 00:19:21.396 { 00:19:21.396 "name": "BaseBdev2", 00:19:21.396 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:21.396 "is_configured": true, 00:19:21.396 "data_offset": 2048, 00:19:21.396 "data_size": 63488 00:19:21.396 }, 00:19:21.396 { 00:19:21.396 "name": "BaseBdev3", 00:19:21.396 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:21.396 "is_configured": true, 00:19:21.396 "data_offset": 2048, 00:19:21.396 "data_size": 63488 00:19:21.396 }, 00:19:21.396 { 00:19:21.396 "name": "BaseBdev4", 00:19:21.396 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:21.396 "is_configured": true, 00:19:21.396 "data_offset": 2048, 00:19:21.396 "data_size": 63488 00:19:21.396 } 00:19:21.396 ] 00:19:21.396 }' 00:19:21.396 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.396 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:21.396 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.396 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.396 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:21.396 14:30:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.396 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.396 [2024-11-20 14:30:00.227449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:21.396 [2024-11-20 14:30:00.295395] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:21.396 [2024-11-20 14:30:00.295732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.396 [2024-11-20 14:30:00.295777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:21.396 [2024-11-20 14:30:00.295790] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:21.653 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.653 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:21.653 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.653 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.653 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:21.653 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.653 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:21.653 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.653 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.653 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.653 14:30:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.653 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.653 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.653 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.653 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.653 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.653 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.653 "name": "raid_bdev1", 00:19:21.653 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:21.653 "strip_size_kb": 64, 00:19:21.654 "state": "online", 00:19:21.654 "raid_level": "raid5f", 00:19:21.654 "superblock": true, 00:19:21.654 "num_base_bdevs": 4, 00:19:21.654 "num_base_bdevs_discovered": 3, 00:19:21.654 "num_base_bdevs_operational": 3, 00:19:21.654 "base_bdevs_list": [ 00:19:21.654 { 00:19:21.654 "name": null, 00:19:21.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.654 "is_configured": false, 00:19:21.654 "data_offset": 0, 00:19:21.654 "data_size": 63488 00:19:21.654 }, 00:19:21.654 { 00:19:21.654 "name": "BaseBdev2", 00:19:21.654 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:21.654 "is_configured": true, 00:19:21.654 "data_offset": 2048, 00:19:21.654 "data_size": 63488 00:19:21.654 }, 00:19:21.654 { 00:19:21.654 "name": "BaseBdev3", 00:19:21.654 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:21.654 "is_configured": true, 00:19:21.654 "data_offset": 2048, 00:19:21.654 "data_size": 63488 00:19:21.654 }, 00:19:21.654 { 00:19:21.654 "name": "BaseBdev4", 00:19:21.654 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:21.654 "is_configured": true, 00:19:21.654 "data_offset": 2048, 00:19:21.654 
"data_size": 63488 00:19:21.654 } 00:19:21.654 ] 00:19:21.654 }' 00:19:21.654 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.654 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.912 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:21.912 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.912 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:21.912 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:21.912 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.912 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.912 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.912 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.912 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.912 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.912 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.912 "name": "raid_bdev1", 00:19:21.912 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:21.913 "strip_size_kb": 64, 00:19:21.913 "state": "online", 00:19:21.913 "raid_level": "raid5f", 00:19:21.913 "superblock": true, 00:19:21.913 "num_base_bdevs": 4, 00:19:21.913 "num_base_bdevs_discovered": 3, 00:19:21.913 "num_base_bdevs_operational": 3, 00:19:21.913 "base_bdevs_list": [ 00:19:21.913 { 00:19:21.913 "name": null, 00:19:21.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.913 
"is_configured": false, 00:19:21.913 "data_offset": 0, 00:19:21.913 "data_size": 63488 00:19:21.913 }, 00:19:21.913 { 00:19:21.913 "name": "BaseBdev2", 00:19:21.913 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:21.913 "is_configured": true, 00:19:21.913 "data_offset": 2048, 00:19:21.913 "data_size": 63488 00:19:21.913 }, 00:19:21.913 { 00:19:21.913 "name": "BaseBdev3", 00:19:21.913 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:21.913 "is_configured": true, 00:19:21.913 "data_offset": 2048, 00:19:21.913 "data_size": 63488 00:19:21.913 }, 00:19:21.913 { 00:19:21.913 "name": "BaseBdev4", 00:19:21.913 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:21.913 "is_configured": true, 00:19:21.913 "data_offset": 2048, 00:19:21.913 "data_size": 63488 00:19:21.913 } 00:19:21.913 ] 00:19:21.913 }' 00:19:21.913 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.173 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:22.173 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.173 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:22.173 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:22.173 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.173 14:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.173 14:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.173 14:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:22.173 14:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.173 14:30:01 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.173 [2024-11-20 14:30:01.006783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:22.173 [2024-11-20 14:30:01.007019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.173 [2024-11-20 14:30:01.007066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:19:22.173 [2024-11-20 14:30:01.007082] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.173 [2024-11-20 14:30:01.007694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.173 [2024-11-20 14:30:01.007726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:22.173 [2024-11-20 14:30:01.007842] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:22.173 [2024-11-20 14:30:01.007870] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:22.173 [2024-11-20 14:30:01.007888] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:22.173 [2024-11-20 14:30:01.007901] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:22.173 BaseBdev1 00:19:22.173 14:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.173 14:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:23.109 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:23.109 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.109 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:23.109 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.109 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.109 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:23.109 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.109 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.109 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.109 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.109 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.109 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.109 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.109 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.109 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.368 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.368 "name": "raid_bdev1", 00:19:23.368 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:23.368 "strip_size_kb": 64, 00:19:23.368 "state": "online", 00:19:23.368 "raid_level": "raid5f", 00:19:23.368 "superblock": true, 00:19:23.368 "num_base_bdevs": 4, 00:19:23.368 "num_base_bdevs_discovered": 3, 00:19:23.368 "num_base_bdevs_operational": 3, 00:19:23.368 "base_bdevs_list": [ 00:19:23.368 { 00:19:23.368 "name": null, 00:19:23.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.368 "is_configured": false, 00:19:23.368 
"data_offset": 0, 00:19:23.368 "data_size": 63488 00:19:23.368 }, 00:19:23.368 { 00:19:23.368 "name": "BaseBdev2", 00:19:23.368 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:23.368 "is_configured": true, 00:19:23.368 "data_offset": 2048, 00:19:23.368 "data_size": 63488 00:19:23.368 }, 00:19:23.368 { 00:19:23.368 "name": "BaseBdev3", 00:19:23.368 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:23.368 "is_configured": true, 00:19:23.368 "data_offset": 2048, 00:19:23.368 "data_size": 63488 00:19:23.368 }, 00:19:23.368 { 00:19:23.368 "name": "BaseBdev4", 00:19:23.368 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:23.368 "is_configured": true, 00:19:23.368 "data_offset": 2048, 00:19:23.368 "data_size": 63488 00:19:23.368 } 00:19:23.368 ] 00:19:23.368 }' 00:19:23.368 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.368 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.626 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.626 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.626 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:23.626 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:23.626 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.626 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.626 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.626 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.626 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:19:23.626 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.626 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.626 "name": "raid_bdev1", 00:19:23.626 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:23.626 "strip_size_kb": 64, 00:19:23.626 "state": "online", 00:19:23.626 "raid_level": "raid5f", 00:19:23.626 "superblock": true, 00:19:23.626 "num_base_bdevs": 4, 00:19:23.626 "num_base_bdevs_discovered": 3, 00:19:23.626 "num_base_bdevs_operational": 3, 00:19:23.626 "base_bdevs_list": [ 00:19:23.626 { 00:19:23.626 "name": null, 00:19:23.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.626 "is_configured": false, 00:19:23.626 "data_offset": 0, 00:19:23.626 "data_size": 63488 00:19:23.626 }, 00:19:23.626 { 00:19:23.626 "name": "BaseBdev2", 00:19:23.626 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:23.626 "is_configured": true, 00:19:23.626 "data_offset": 2048, 00:19:23.626 "data_size": 63488 00:19:23.626 }, 00:19:23.626 { 00:19:23.626 "name": "BaseBdev3", 00:19:23.626 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:23.626 "is_configured": true, 00:19:23.626 "data_offset": 2048, 00:19:23.626 "data_size": 63488 00:19:23.626 }, 00:19:23.626 { 00:19:23.626 "name": "BaseBdev4", 00:19:23.626 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:23.626 "is_configured": true, 00:19:23.626 "data_offset": 2048, 00:19:23.626 "data_size": 63488 00:19:23.626 } 00:19:23.626 ] 00:19:23.626 }' 00:19:23.626 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:23.885 
14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.885 [2024-11-20 14:30:02.663311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:23.885 [2024-11-20 14:30:02.663538] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:23.885 [2024-11-20 14:30:02.663562] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:23.885 request: 00:19:23.885 { 00:19:23.885 "base_bdev": "BaseBdev1", 00:19:23.885 "raid_bdev": "raid_bdev1", 00:19:23.885 "method": "bdev_raid_add_base_bdev", 00:19:23.885 "req_id": 1 00:19:23.885 } 00:19:23.885 Got JSON-RPC error response 00:19:23.885 response: 00:19:23.885 { 00:19:23.885 "code": -22, 00:19:23.885 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:19:23.885 } 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:23.885 14:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:24.821 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:24.821 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.821 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.821 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:24.821 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.821 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:24.821 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.821 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.821 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.821 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.821 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.821 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.821 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.821 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.821 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.821 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.821 "name": "raid_bdev1", 00:19:24.821 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:24.821 "strip_size_kb": 64, 00:19:24.821 "state": "online", 00:19:24.821 "raid_level": "raid5f", 00:19:24.821 "superblock": true, 00:19:24.821 "num_base_bdevs": 4, 00:19:24.821 "num_base_bdevs_discovered": 3, 00:19:24.821 "num_base_bdevs_operational": 3, 00:19:24.821 "base_bdevs_list": [ 00:19:24.821 { 00:19:24.821 "name": null, 00:19:24.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.821 "is_configured": false, 00:19:24.821 "data_offset": 0, 00:19:24.821 "data_size": 63488 00:19:24.821 }, 00:19:24.821 { 00:19:24.821 "name": "BaseBdev2", 00:19:24.821 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:24.821 "is_configured": true, 00:19:24.821 "data_offset": 2048, 00:19:24.821 "data_size": 63488 00:19:24.821 }, 00:19:24.821 { 00:19:24.821 "name": "BaseBdev3", 00:19:24.821 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:24.821 "is_configured": true, 00:19:24.821 "data_offset": 2048, 00:19:24.821 "data_size": 63488 00:19:24.821 }, 00:19:24.822 { 00:19:24.822 "name": "BaseBdev4", 00:19:24.822 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:24.822 "is_configured": true, 00:19:24.822 "data_offset": 2048, 00:19:24.822 "data_size": 63488 00:19:24.822 } 00:19:24.822 ] 00:19:24.822 }' 00:19:24.822 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.822 14:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.395 "name": "raid_bdev1", 00:19:25.395 "uuid": "63a40a6a-1f5d-4d8d-bf58-66a65969b32b", 00:19:25.395 "strip_size_kb": 64, 00:19:25.395 "state": "online", 00:19:25.395 "raid_level": "raid5f", 00:19:25.395 "superblock": true, 00:19:25.395 "num_base_bdevs": 4, 00:19:25.395 "num_base_bdevs_discovered": 3, 00:19:25.395 "num_base_bdevs_operational": 3, 00:19:25.395 "base_bdevs_list": [ 00:19:25.395 { 00:19:25.395 "name": null, 00:19:25.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.395 "is_configured": false, 00:19:25.395 "data_offset": 0, 00:19:25.395 "data_size": 63488 00:19:25.395 }, 00:19:25.395 { 00:19:25.395 "name": "BaseBdev2", 00:19:25.395 "uuid": "ae4d0cd6-1810-53df-8b3e-0cca0a84a3e6", 00:19:25.395 "is_configured": true, 
00:19:25.395 "data_offset": 2048, 00:19:25.395 "data_size": 63488 00:19:25.395 }, 00:19:25.395 { 00:19:25.395 "name": "BaseBdev3", 00:19:25.395 "uuid": "fe0cf83f-f676-58ab-acfd-c9adb69c8801", 00:19:25.395 "is_configured": true, 00:19:25.395 "data_offset": 2048, 00:19:25.395 "data_size": 63488 00:19:25.395 }, 00:19:25.395 { 00:19:25.395 "name": "BaseBdev4", 00:19:25.395 "uuid": "8d794c72-8184-5842-8622-841d76350523", 00:19:25.395 "is_configured": true, 00:19:25.395 "data_offset": 2048, 00:19:25.395 "data_size": 63488 00:19:25.395 } 00:19:25.395 ] 00:19:25.395 }' 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85561 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85561 ']' 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85561 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85561 00:19:25.395 killing process with pid 85561 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.395 14:30:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85561' 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85561 00:19:25.395 Received shutdown signal, test time was about 60.000000 seconds 00:19:25.395 00:19:25.395 Latency(us) 00:19:25.395 [2024-11-20T14:30:04.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.395 [2024-11-20T14:30:04.377Z] =================================================================================================================== 00:19:25.395 [2024-11-20T14:30:04.377Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.395 14:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85561 00:19:25.395 [2024-11-20 14:30:04.370040] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:25.395 [2024-11-20 14:30:04.370214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.395 [2024-11-20 14:30:04.370341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.395 [2024-11-20 14:30:04.370375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:25.962 [2024-11-20 14:30:04.814946] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:26.899 ************************************ 00:19:26.899 END TEST raid5f_rebuild_test_sb 00:19:26.899 ************************************ 00:19:26.899 14:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:26.899 00:19:26.899 real 0m28.606s 00:19:26.899 user 0m37.150s 00:19:26.899 sys 0m2.896s 00:19:26.899 14:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.899 14:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.158 14:30:05 
bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:19:27.158 14:30:05 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:19:27.158 14:30:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:27.158 14:30:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.158 14:30:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.158 ************************************ 00:19:27.158 START TEST raid_state_function_test_sb_4k 00:19:27.158 ************************************ 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:27.158 14:30:05 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86382 00:19:27.158 Process raid pid: 86382 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86382' 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86382 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86382 ']' 00:19:27.158 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.158 14:30:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.158 [2024-11-20 14:30:06.004221] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:19:27.158 [2024-11-20 14:30:06.004392] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.417 [2024-11-20 14:30:06.185342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.418 [2024-11-20 14:30:06.319913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.677 [2024-11-20 14:30:06.530089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:27.677 [2024-11-20 14:30:06.530277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:28.244 14:30:07 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.244 [2024-11-20 14:30:07.058896] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.244 [2024-11-20 14:30:07.058979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:28.244 [2024-11-20 14:30:07.059015] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.244 [2024-11-20 14:30:07.059035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.244 14:30:07 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.244 "name": "Existed_Raid", 00:19:28.244 "uuid": "14ac7a7c-02e5-434f-8f9d-cd99c7f84e72", 00:19:28.244 "strip_size_kb": 0, 00:19:28.244 "state": "configuring", 00:19:28.244 "raid_level": "raid1", 00:19:28.244 "superblock": true, 00:19:28.244 "num_base_bdevs": 2, 00:19:28.244 "num_base_bdevs_discovered": 0, 00:19:28.244 "num_base_bdevs_operational": 2, 00:19:28.244 "base_bdevs_list": [ 00:19:28.244 { 00:19:28.244 "name": "BaseBdev1", 00:19:28.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.244 "is_configured": false, 00:19:28.244 "data_offset": 0, 00:19:28.244 "data_size": 0 00:19:28.244 }, 00:19:28.244 { 00:19:28.244 "name": "BaseBdev2", 00:19:28.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.244 "is_configured": false, 00:19:28.244 "data_offset": 0, 00:19:28.244 "data_size": 0 00:19:28.244 } 00:19:28.244 ] 00:19:28.244 }' 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.244 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.817 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:19:28.817 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.817 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.817 [2024-11-20 14:30:07.651003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:28.817 [2024-11-20 14:30:07.651056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:28.817 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.817 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:28.817 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.817 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.817 [2024-11-20 14:30:07.662983] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.817 [2024-11-20 14:30:07.663230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:28.817 [2024-11-20 14:30:07.663386] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.817 [2024-11-20 14:30:07.663544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.817 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.817 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:19:28.817 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.817 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:28.818 [2024-11-20 14:30:07.713136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:28.818 BaseBdev1 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.818 [ 00:19:28.818 { 00:19:28.818 "name": "BaseBdev1", 00:19:28.818 "aliases": [ 00:19:28.818 "4ec12193-29c8-4718-8b2b-fca22922275c" 00:19:28.818 
], 00:19:28.818 "product_name": "Malloc disk", 00:19:28.818 "block_size": 4096, 00:19:28.818 "num_blocks": 8192, 00:19:28.818 "uuid": "4ec12193-29c8-4718-8b2b-fca22922275c", 00:19:28.818 "assigned_rate_limits": { 00:19:28.818 "rw_ios_per_sec": 0, 00:19:28.818 "rw_mbytes_per_sec": 0, 00:19:28.818 "r_mbytes_per_sec": 0, 00:19:28.818 "w_mbytes_per_sec": 0 00:19:28.818 }, 00:19:28.818 "claimed": true, 00:19:28.818 "claim_type": "exclusive_write", 00:19:28.818 "zoned": false, 00:19:28.818 "supported_io_types": { 00:19:28.818 "read": true, 00:19:28.818 "write": true, 00:19:28.818 "unmap": true, 00:19:28.818 "flush": true, 00:19:28.818 "reset": true, 00:19:28.818 "nvme_admin": false, 00:19:28.818 "nvme_io": false, 00:19:28.818 "nvme_io_md": false, 00:19:28.818 "write_zeroes": true, 00:19:28.818 "zcopy": true, 00:19:28.818 "get_zone_info": false, 00:19:28.818 "zone_management": false, 00:19:28.818 "zone_append": false, 00:19:28.818 "compare": false, 00:19:28.818 "compare_and_write": false, 00:19:28.818 "abort": true, 00:19:28.818 "seek_hole": false, 00:19:28.818 "seek_data": false, 00:19:28.818 "copy": true, 00:19:28.818 "nvme_iov_md": false 00:19:28.818 }, 00:19:28.818 "memory_domains": [ 00:19:28.818 { 00:19:28.818 "dma_device_id": "system", 00:19:28.818 "dma_device_type": 1 00:19:28.818 }, 00:19:28.818 { 00:19:28.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.818 "dma_device_type": 2 00:19:28.818 } 00:19:28.818 ], 00:19:28.818 "driver_specific": {} 00:19:28.818 } 00:19:28.818 ] 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.818 "name": "Existed_Raid", 00:19:28.818 "uuid": "552c0afd-4502-4dae-8f8b-e6872044036f", 00:19:28.818 "strip_size_kb": 0, 00:19:28.818 "state": "configuring", 00:19:28.818 "raid_level": "raid1", 00:19:28.818 "superblock": true, 00:19:28.818 "num_base_bdevs": 2, 00:19:28.818 "num_base_bdevs_discovered": 1, 
00:19:28.818 "num_base_bdevs_operational": 2, 00:19:28.818 "base_bdevs_list": [ 00:19:28.818 { 00:19:28.818 "name": "BaseBdev1", 00:19:28.818 "uuid": "4ec12193-29c8-4718-8b2b-fca22922275c", 00:19:28.818 "is_configured": true, 00:19:28.818 "data_offset": 256, 00:19:28.818 "data_size": 7936 00:19:28.818 }, 00:19:28.818 { 00:19:28.818 "name": "BaseBdev2", 00:19:28.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.818 "is_configured": false, 00:19:28.818 "data_offset": 0, 00:19:28.818 "data_size": 0 00:19:28.818 } 00:19:28.818 ] 00:19:28.818 }' 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.818 14:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.386 [2024-11-20 14:30:08.273373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:29.386 [2024-11-20 14:30:08.273444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.386 [2024-11-20 14:30:08.281429] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:29.386 [2024-11-20 14:30:08.283978] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:29.386 [2024-11-20 14:30:08.284072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.386 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.386 "name": "Existed_Raid", 00:19:29.386 "uuid": "999ae207-20e9-43d5-a699-1357f015554b", 00:19:29.386 "strip_size_kb": 0, 00:19:29.386 "state": "configuring", 00:19:29.386 "raid_level": "raid1", 00:19:29.386 "superblock": true, 00:19:29.386 "num_base_bdevs": 2, 00:19:29.386 "num_base_bdevs_discovered": 1, 00:19:29.386 "num_base_bdevs_operational": 2, 00:19:29.386 "base_bdevs_list": [ 00:19:29.386 { 00:19:29.386 "name": "BaseBdev1", 00:19:29.386 "uuid": "4ec12193-29c8-4718-8b2b-fca22922275c", 00:19:29.386 "is_configured": true, 00:19:29.386 "data_offset": 256, 00:19:29.387 "data_size": 7936 00:19:29.387 }, 00:19:29.387 { 00:19:29.387 "name": "BaseBdev2", 00:19:29.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.387 "is_configured": false, 00:19:29.387 "data_offset": 0, 00:19:29.387 "data_size": 0 00:19:29.387 } 00:19:29.387 ] 00:19:29.387 }' 00:19:29.387 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.387 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.954 14:30:08 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.954 [2024-11-20 14:30:08.849296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:29.954 [2024-11-20 14:30:08.849659] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:29.954 [2024-11-20 14:30:08.849682] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:29.954 BaseBdev2 00:19:29.954 [2024-11-20 14:30:08.850028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:29.954 [2024-11-20 14:30:08.850269] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:29.954 [2024-11-20 14:30:08.850303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:29.954 [2024-11-20 14:30:08.850482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:29.954 14:30:08 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.954 [ 00:19:29.954 { 00:19:29.954 "name": "BaseBdev2", 00:19:29.954 "aliases": [ 00:19:29.954 "ee61987d-646b-4ae5-8cb6-9046a95571e0" 00:19:29.954 ], 00:19:29.954 "product_name": "Malloc disk", 00:19:29.954 "block_size": 4096, 00:19:29.954 "num_blocks": 8192, 00:19:29.954 "uuid": "ee61987d-646b-4ae5-8cb6-9046a95571e0", 00:19:29.954 "assigned_rate_limits": { 00:19:29.954 "rw_ios_per_sec": 0, 00:19:29.954 "rw_mbytes_per_sec": 0, 00:19:29.954 "r_mbytes_per_sec": 0, 00:19:29.954 "w_mbytes_per_sec": 0 00:19:29.954 }, 00:19:29.954 "claimed": true, 00:19:29.954 "claim_type": "exclusive_write", 00:19:29.954 "zoned": false, 00:19:29.954 "supported_io_types": { 00:19:29.954 "read": true, 00:19:29.954 "write": true, 00:19:29.954 "unmap": true, 00:19:29.954 "flush": true, 00:19:29.954 "reset": true, 00:19:29.954 "nvme_admin": false, 00:19:29.954 "nvme_io": false, 00:19:29.954 "nvme_io_md": false, 00:19:29.954 "write_zeroes": true, 00:19:29.954 "zcopy": true, 00:19:29.954 "get_zone_info": false, 00:19:29.954 "zone_management": false, 00:19:29.954 "zone_append": false, 00:19:29.954 "compare": false, 00:19:29.954 "compare_and_write": false, 00:19:29.954 "abort": true, 00:19:29.954 "seek_hole": false, 00:19:29.954 "seek_data": false, 00:19:29.954 "copy": true, 00:19:29.954 "nvme_iov_md": false 
00:19:29.954 }, 00:19:29.954 "memory_domains": [ 00:19:29.954 { 00:19:29.954 "dma_device_id": "system", 00:19:29.954 "dma_device_type": 1 00:19:29.954 }, 00:19:29.954 { 00:19:29.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.954 "dma_device_type": 2 00:19:29.954 } 00:19:29.954 ], 00:19:29.954 "driver_specific": {} 00:19:29.954 } 00:19:29.954 ] 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.954 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:29.955 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.955 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.955 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.955 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.955 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.213 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.213 "name": "Existed_Raid", 00:19:30.213 "uuid": "999ae207-20e9-43d5-a699-1357f015554b", 00:19:30.213 "strip_size_kb": 0, 00:19:30.213 "state": "online", 00:19:30.213 "raid_level": "raid1", 00:19:30.213 "superblock": true, 00:19:30.213 "num_base_bdevs": 2, 00:19:30.213 "num_base_bdevs_discovered": 2, 00:19:30.213 "num_base_bdevs_operational": 2, 00:19:30.213 "base_bdevs_list": [ 00:19:30.213 { 00:19:30.213 "name": "BaseBdev1", 00:19:30.213 "uuid": "4ec12193-29c8-4718-8b2b-fca22922275c", 00:19:30.213 "is_configured": true, 00:19:30.213 "data_offset": 256, 00:19:30.213 "data_size": 7936 00:19:30.213 }, 00:19:30.213 { 00:19:30.213 "name": "BaseBdev2", 00:19:30.213 "uuid": "ee61987d-646b-4ae5-8cb6-9046a95571e0", 00:19:30.213 "is_configured": true, 00:19:30.213 "data_offset": 256, 00:19:30.213 "data_size": 7936 00:19:30.213 } 00:19:30.213 ] 00:19:30.213 }' 00:19:30.213 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.213 14:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.471 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:30.471 14:30:09 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:30.471 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:30.471 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:30.471 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:30.471 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:30.471 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:30.471 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:30.471 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.471 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.471 [2024-11-20 14:30:09.433838] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:30.471 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:30.729 "name": "Existed_Raid", 00:19:30.729 "aliases": [ 00:19:30.729 "999ae207-20e9-43d5-a699-1357f015554b" 00:19:30.729 ], 00:19:30.729 "product_name": "Raid Volume", 00:19:30.729 "block_size": 4096, 00:19:30.729 "num_blocks": 7936, 00:19:30.729 "uuid": "999ae207-20e9-43d5-a699-1357f015554b", 00:19:30.729 "assigned_rate_limits": { 00:19:30.729 "rw_ios_per_sec": 0, 00:19:30.729 "rw_mbytes_per_sec": 0, 00:19:30.729 "r_mbytes_per_sec": 0, 00:19:30.729 "w_mbytes_per_sec": 0 00:19:30.729 }, 00:19:30.729 "claimed": false, 00:19:30.729 "zoned": false, 00:19:30.729 "supported_io_types": { 00:19:30.729 "read": true, 
00:19:30.729 "write": true, 00:19:30.729 "unmap": false, 00:19:30.729 "flush": false, 00:19:30.729 "reset": true, 00:19:30.729 "nvme_admin": false, 00:19:30.729 "nvme_io": false, 00:19:30.729 "nvme_io_md": false, 00:19:30.729 "write_zeroes": true, 00:19:30.729 "zcopy": false, 00:19:30.729 "get_zone_info": false, 00:19:30.729 "zone_management": false, 00:19:30.729 "zone_append": false, 00:19:30.729 "compare": false, 00:19:30.729 "compare_and_write": false, 00:19:30.729 "abort": false, 00:19:30.729 "seek_hole": false, 00:19:30.729 "seek_data": false, 00:19:30.729 "copy": false, 00:19:30.729 "nvme_iov_md": false 00:19:30.729 }, 00:19:30.729 "memory_domains": [ 00:19:30.729 { 00:19:30.729 "dma_device_id": "system", 00:19:30.729 "dma_device_type": 1 00:19:30.729 }, 00:19:30.729 { 00:19:30.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.729 "dma_device_type": 2 00:19:30.729 }, 00:19:30.729 { 00:19:30.729 "dma_device_id": "system", 00:19:30.729 "dma_device_type": 1 00:19:30.729 }, 00:19:30.729 { 00:19:30.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.729 "dma_device_type": 2 00:19:30.729 } 00:19:30.729 ], 00:19:30.729 "driver_specific": { 00:19:30.729 "raid": { 00:19:30.729 "uuid": "999ae207-20e9-43d5-a699-1357f015554b", 00:19:30.729 "strip_size_kb": 0, 00:19:30.729 "state": "online", 00:19:30.729 "raid_level": "raid1", 00:19:30.729 "superblock": true, 00:19:30.729 "num_base_bdevs": 2, 00:19:30.729 "num_base_bdevs_discovered": 2, 00:19:30.729 "num_base_bdevs_operational": 2, 00:19:30.729 "base_bdevs_list": [ 00:19:30.729 { 00:19:30.729 "name": "BaseBdev1", 00:19:30.729 "uuid": "4ec12193-29c8-4718-8b2b-fca22922275c", 00:19:30.729 "is_configured": true, 00:19:30.729 "data_offset": 256, 00:19:30.729 "data_size": 7936 00:19:30.729 }, 00:19:30.729 { 00:19:30.729 "name": "BaseBdev2", 00:19:30.729 "uuid": "ee61987d-646b-4ae5-8cb6-9046a95571e0", 00:19:30.729 "is_configured": true, 00:19:30.729 "data_offset": 256, 00:19:30.729 "data_size": 7936 00:19:30.729 } 
00:19:30.729 ] 00:19:30.729 } 00:19:30.729 } 00:19:30.729 }' 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:30.729 BaseBdev2' 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.729 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.729 [2024-11-20 14:30:09.705653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:30.987 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.987 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:30.987 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:30.987 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:30.987 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.988 "name": "Existed_Raid", 00:19:30.988 "uuid": "999ae207-20e9-43d5-a699-1357f015554b", 00:19:30.988 "strip_size_kb": 0, 00:19:30.988 "state": "online", 00:19:30.988 "raid_level": "raid1", 00:19:30.988 "superblock": true, 00:19:30.988 "num_base_bdevs": 2, 00:19:30.988 
"num_base_bdevs_discovered": 1, 00:19:30.988 "num_base_bdevs_operational": 1, 00:19:30.988 "base_bdevs_list": [ 00:19:30.988 { 00:19:30.988 "name": null, 00:19:30.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.988 "is_configured": false, 00:19:30.988 "data_offset": 0, 00:19:30.988 "data_size": 7936 00:19:30.988 }, 00:19:30.988 { 00:19:30.988 "name": "BaseBdev2", 00:19:30.988 "uuid": "ee61987d-646b-4ae5-8cb6-9046a95571e0", 00:19:30.988 "is_configured": true, 00:19:30.988 "data_offset": 256, 00:19:30.988 "data_size": 7936 00:19:30.988 } 00:19:30.988 ] 00:19:30.988 }' 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.988 14:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:31.555 14:30:10 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.555 [2024-11-20 14:30:10.398593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:31.555 [2024-11-20 14:30:10.398969] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:31.555 [2024-11-20 14:30:10.488971] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:31.555 [2024-11-20 14:30:10.489310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:31.555 [2024-11-20 14:30:10.489496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:31.555 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.813 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:31.813 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:19:31.813 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:31.813 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86382 00:19:31.813 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86382 ']' 00:19:31.813 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86382 00:19:31.813 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:19:31.813 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.813 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86382 00:19:31.813 killing process with pid 86382 00:19:31.813 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.813 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.813 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86382' 00:19:31.813 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86382 00:19:31.813 [2024-11-20 14:30:10.584854] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:31.813 14:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86382 00:19:31.813 [2024-11-20 14:30:10.600090] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:32.750 14:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:19:32.751 00:19:32.751 real 0m5.802s 00:19:32.751 user 0m8.826s 00:19:32.751 sys 0m0.786s 00:19:32.751 ************************************ 00:19:32.751 END TEST raid_state_function_test_sb_4k 00:19:32.751 
************************************ 00:19:32.751 14:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:32.751 14:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.010 14:30:11 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:19:33.010 14:30:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:33.010 14:30:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.010 14:30:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.010 ************************************ 00:19:33.010 START TEST raid_superblock_test_4k 00:19:33.010 ************************************ 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:33.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86636 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86636 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86636 ']' 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.010 14:30:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.010 [2024-11-20 14:30:11.854505] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:19:33.010 [2024-11-20 14:30:11.854955] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86636 ] 00:19:33.269 [2024-11-20 14:30:12.028251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.269 [2024-11-20 14:30:12.159527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.528 [2024-11-20 14:30:12.363907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:33.528 [2024-11-20 14:30:12.364016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.095 malloc1 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.095 [2024-11-20 14:30:12.942777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:34.095 [2024-11-20 14:30:12.943081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.095 [2024-11-20 14:30:12.943167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:34.095 [2024-11-20 14:30:12.943426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.095 [2024-11-20 14:30:12.946340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.095 [2024-11-20 14:30:12.946517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:34.095 pt1 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:34.095 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:34.096 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:34.096 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:19:34.096 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:34.096 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:34.096 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:34.096 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:34.096 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:19:34.096 14:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.096 14:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.096 malloc2 00:19:34.096 14:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.096 14:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:34.096 14:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.096 14:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.096 [2024-11-20 14:30:13.003096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:34.096 [2024-11-20 14:30:13.003186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.096 [2024-11-20 14:30:13.003228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:34.096 [2024-11-20 14:30:13.003245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.096 [2024-11-20 14:30:13.006102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.096 [2024-11-20 
14:30:13.006150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:34.096 pt2 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.096 [2024-11-20 14:30:13.015223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:34.096 [2024-11-20 14:30:13.018133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:34.096 [2024-11-20 14:30:13.018387] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:34.096 [2024-11-20 14:30:13.018413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:34.096 [2024-11-20 14:30:13.018751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:34.096 [2024-11-20 14:30:13.018999] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:34.096 [2024-11-20 14:30:13.019030] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:34.096 [2024-11-20 14:30:13.019311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.096 "name": "raid_bdev1", 00:19:34.096 "uuid": "77f02f96-467e-4a58-aa68-cf3677cc7bab", 00:19:34.096 "strip_size_kb": 0, 00:19:34.096 "state": "online", 00:19:34.096 "raid_level": "raid1", 00:19:34.096 "superblock": true, 00:19:34.096 "num_base_bdevs": 2, 00:19:34.096 
"num_base_bdevs_discovered": 2, 00:19:34.096 "num_base_bdevs_operational": 2, 00:19:34.096 "base_bdevs_list": [ 00:19:34.096 { 00:19:34.096 "name": "pt1", 00:19:34.096 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:34.096 "is_configured": true, 00:19:34.096 "data_offset": 256, 00:19:34.096 "data_size": 7936 00:19:34.096 }, 00:19:34.096 { 00:19:34.096 "name": "pt2", 00:19:34.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:34.096 "is_configured": true, 00:19:34.096 "data_offset": 256, 00:19:34.096 "data_size": 7936 00:19:34.096 } 00:19:34.096 ] 00:19:34.096 }' 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.096 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.662 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:34.662 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:34.662 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:34.662 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:34.662 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:34.662 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:34.662 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:34.662 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:34.662 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.662 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.662 [2024-11-20 14:30:13.535769] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:34.662 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.662 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:34.662 "name": "raid_bdev1", 00:19:34.662 "aliases": [ 00:19:34.662 "77f02f96-467e-4a58-aa68-cf3677cc7bab" 00:19:34.662 ], 00:19:34.662 "product_name": "Raid Volume", 00:19:34.662 "block_size": 4096, 00:19:34.662 "num_blocks": 7936, 00:19:34.662 "uuid": "77f02f96-467e-4a58-aa68-cf3677cc7bab", 00:19:34.662 "assigned_rate_limits": { 00:19:34.662 "rw_ios_per_sec": 0, 00:19:34.662 "rw_mbytes_per_sec": 0, 00:19:34.662 "r_mbytes_per_sec": 0, 00:19:34.662 "w_mbytes_per_sec": 0 00:19:34.662 }, 00:19:34.662 "claimed": false, 00:19:34.662 "zoned": false, 00:19:34.662 "supported_io_types": { 00:19:34.662 "read": true, 00:19:34.662 "write": true, 00:19:34.662 "unmap": false, 00:19:34.662 "flush": false, 00:19:34.662 "reset": true, 00:19:34.662 "nvme_admin": false, 00:19:34.662 "nvme_io": false, 00:19:34.662 "nvme_io_md": false, 00:19:34.662 "write_zeroes": true, 00:19:34.663 "zcopy": false, 00:19:34.663 "get_zone_info": false, 00:19:34.663 "zone_management": false, 00:19:34.663 "zone_append": false, 00:19:34.663 "compare": false, 00:19:34.663 "compare_and_write": false, 00:19:34.663 "abort": false, 00:19:34.663 "seek_hole": false, 00:19:34.663 "seek_data": false, 00:19:34.663 "copy": false, 00:19:34.663 "nvme_iov_md": false 00:19:34.663 }, 00:19:34.663 "memory_domains": [ 00:19:34.663 { 00:19:34.663 "dma_device_id": "system", 00:19:34.663 "dma_device_type": 1 00:19:34.663 }, 00:19:34.663 { 00:19:34.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.663 "dma_device_type": 2 00:19:34.663 }, 00:19:34.663 { 00:19:34.663 "dma_device_id": "system", 00:19:34.663 "dma_device_type": 1 00:19:34.663 }, 00:19:34.663 { 00:19:34.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.663 "dma_device_type": 2 00:19:34.663 } 00:19:34.663 ], 
00:19:34.663 "driver_specific": { 00:19:34.663 "raid": { 00:19:34.663 "uuid": "77f02f96-467e-4a58-aa68-cf3677cc7bab", 00:19:34.663 "strip_size_kb": 0, 00:19:34.663 "state": "online", 00:19:34.663 "raid_level": "raid1", 00:19:34.663 "superblock": true, 00:19:34.663 "num_base_bdevs": 2, 00:19:34.663 "num_base_bdevs_discovered": 2, 00:19:34.663 "num_base_bdevs_operational": 2, 00:19:34.663 "base_bdevs_list": [ 00:19:34.663 { 00:19:34.663 "name": "pt1", 00:19:34.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:34.663 "is_configured": true, 00:19:34.663 "data_offset": 256, 00:19:34.663 "data_size": 7936 00:19:34.663 }, 00:19:34.663 { 00:19:34.663 "name": "pt2", 00:19:34.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:34.663 "is_configured": true, 00:19:34.663 "data_offset": 256, 00:19:34.663 "data_size": 7936 00:19:34.663 } 00:19:34.663 ] 00:19:34.663 } 00:19:34.663 } 00:19:34.663 }' 00:19:34.663 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:34.663 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:34.663 pt2' 00:19:34.663 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.922 14:30:13 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.922 [2024-11-20 14:30:13.807946] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=77f02f96-467e-4a58-aa68-cf3677cc7bab 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 77f02f96-467e-4a58-aa68-cf3677cc7bab ']' 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.922 [2024-11-20 14:30:13.855456] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:34.922 [2024-11-20 14:30:13.855497] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:34.922 [2024-11-20 14:30:13.855607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.922 [2024-11-20 14:30:13.855690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:34.922 [2024-11-20 14:30:13.855712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.922 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.181 [2024-11-20 14:30:13.983534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:35.181 [2024-11-20 14:30:13.986251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:35.181 [2024-11-20 14:30:13.986363] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:35.181 [2024-11-20 14:30:13.986462] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:35.181 [2024-11-20 14:30:13.986501] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:35.181 [2024-11-20 14:30:13.986523] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:35.181 request: 00:19:35.181 { 00:19:35.181 "name": "raid_bdev1", 00:19:35.181 "raid_level": "raid1", 00:19:35.181 "base_bdevs": [ 00:19:35.181 "malloc1", 00:19:35.181 "malloc2" 00:19:35.181 ], 00:19:35.181 "superblock": false, 00:19:35.181 "method": "bdev_raid_create", 00:19:35.181 "req_id": 1 00:19:35.181 } 00:19:35.181 Got JSON-RPC error response 00:19:35.181 response: 00:19:35.181 { 00:19:35.181 "code": -17, 00:19:35.181 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:35.181 } 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:35.181 14:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.181 14:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.181 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:35.181 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:35.181 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:19:35.181 14:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.181 14:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.181 [2024-11-20 14:30:14.055606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:35.181 [2024-11-20 14:30:14.055899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.181 [2024-11-20 14:30:14.056001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:35.181 [2024-11-20 14:30:14.056244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.181 [2024-11-20 14:30:14.059433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.181 [2024-11-20 14:30:14.059616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:35.181 [2024-11-20 14:30:14.059869] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:35.181 [2024-11-20 14:30:14.060067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:35.181 pt1 00:19:35.181 14:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.181 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:35.181 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.181 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:35.181 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.182 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.182 14:30:14 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:35.182 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.182 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.182 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.182 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.182 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.182 14:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.182 14:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.182 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.182 14:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.182 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.182 "name": "raid_bdev1", 00:19:35.182 "uuid": "77f02f96-467e-4a58-aa68-cf3677cc7bab", 00:19:35.182 "strip_size_kb": 0, 00:19:35.182 "state": "configuring", 00:19:35.182 "raid_level": "raid1", 00:19:35.182 "superblock": true, 00:19:35.182 "num_base_bdevs": 2, 00:19:35.182 "num_base_bdevs_discovered": 1, 00:19:35.182 "num_base_bdevs_operational": 2, 00:19:35.182 "base_bdevs_list": [ 00:19:35.182 { 00:19:35.182 "name": "pt1", 00:19:35.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:35.182 "is_configured": true, 00:19:35.182 "data_offset": 256, 00:19:35.182 "data_size": 7936 00:19:35.182 }, 00:19:35.182 { 00:19:35.182 "name": null, 00:19:35.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.182 "is_configured": false, 00:19:35.182 "data_offset": 256, 00:19:35.182 "data_size": 7936 00:19:35.182 } 
00:19:35.182 ] 00:19:35.182 }' 00:19:35.182 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.182 14:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.747 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.748 [2024-11-20 14:30:14.624158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:35.748 [2024-11-20 14:30:14.624258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.748 [2024-11-20 14:30:14.624294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:35.748 [2024-11-20 14:30:14.624313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.748 [2024-11-20 14:30:14.624885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.748 [2024-11-20 14:30:14.624926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:35.748 [2024-11-20 14:30:14.625048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:35.748 [2024-11-20 14:30:14.625092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:35.748 [2024-11-20 14:30:14.625244] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:19:35.748 [2024-11-20 14:30:14.625280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:35.748 [2024-11-20 14:30:14.625589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:35.748 [2024-11-20 14:30:14.625778] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:35.748 [2024-11-20 14:30:14.625802] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:35.748 [2024-11-20 14:30:14.626009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.748 pt2 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.748 "name": "raid_bdev1", 00:19:35.748 "uuid": "77f02f96-467e-4a58-aa68-cf3677cc7bab", 00:19:35.748 "strip_size_kb": 0, 00:19:35.748 "state": "online", 00:19:35.748 "raid_level": "raid1", 00:19:35.748 "superblock": true, 00:19:35.748 "num_base_bdevs": 2, 00:19:35.748 "num_base_bdevs_discovered": 2, 00:19:35.748 "num_base_bdevs_operational": 2, 00:19:35.748 "base_bdevs_list": [ 00:19:35.748 { 00:19:35.748 "name": "pt1", 00:19:35.748 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:35.748 "is_configured": true, 00:19:35.748 "data_offset": 256, 00:19:35.748 "data_size": 7936 00:19:35.748 }, 00:19:35.748 { 00:19:35.748 "name": "pt2", 00:19:35.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.748 "is_configured": true, 00:19:35.748 "data_offset": 256, 00:19:35.748 "data_size": 7936 00:19:35.748 } 00:19:35.748 ] 00:19:35.748 }' 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.748 14:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.315 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:19:36.315 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:36.315 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:36.315 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:36.315 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:36.315 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:36.315 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:36.315 14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.315 14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.315 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:36.315 [2024-11-20 14:30:15.184712] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.315 14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.315 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:36.315 "name": "raid_bdev1", 00:19:36.315 "aliases": [ 00:19:36.315 "77f02f96-467e-4a58-aa68-cf3677cc7bab" 00:19:36.315 ], 00:19:36.315 "product_name": "Raid Volume", 00:19:36.315 "block_size": 4096, 00:19:36.315 "num_blocks": 7936, 00:19:36.315 "uuid": "77f02f96-467e-4a58-aa68-cf3677cc7bab", 00:19:36.315 "assigned_rate_limits": { 00:19:36.315 "rw_ios_per_sec": 0, 00:19:36.315 "rw_mbytes_per_sec": 0, 00:19:36.315 "r_mbytes_per_sec": 0, 00:19:36.315 "w_mbytes_per_sec": 0 00:19:36.315 }, 00:19:36.315 "claimed": false, 00:19:36.315 "zoned": false, 00:19:36.315 "supported_io_types": { 00:19:36.315 "read": true, 00:19:36.315 "write": true, 00:19:36.315 "unmap": false, 
00:19:36.315 "flush": false, 00:19:36.315 "reset": true, 00:19:36.315 "nvme_admin": false, 00:19:36.315 "nvme_io": false, 00:19:36.315 "nvme_io_md": false, 00:19:36.315 "write_zeroes": true, 00:19:36.315 "zcopy": false, 00:19:36.315 "get_zone_info": false, 00:19:36.315 "zone_management": false, 00:19:36.315 "zone_append": false, 00:19:36.315 "compare": false, 00:19:36.315 "compare_and_write": false, 00:19:36.315 "abort": false, 00:19:36.315 "seek_hole": false, 00:19:36.315 "seek_data": false, 00:19:36.315 "copy": false, 00:19:36.315 "nvme_iov_md": false 00:19:36.315 }, 00:19:36.315 "memory_domains": [ 00:19:36.315 { 00:19:36.315 "dma_device_id": "system", 00:19:36.315 "dma_device_type": 1 00:19:36.315 }, 00:19:36.315 { 00:19:36.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.315 "dma_device_type": 2 00:19:36.315 }, 00:19:36.315 { 00:19:36.315 "dma_device_id": "system", 00:19:36.315 "dma_device_type": 1 00:19:36.315 }, 00:19:36.315 { 00:19:36.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.315 "dma_device_type": 2 00:19:36.315 } 00:19:36.315 ], 00:19:36.315 "driver_specific": { 00:19:36.315 "raid": { 00:19:36.315 "uuid": "77f02f96-467e-4a58-aa68-cf3677cc7bab", 00:19:36.315 "strip_size_kb": 0, 00:19:36.315 "state": "online", 00:19:36.315 "raid_level": "raid1", 00:19:36.315 "superblock": true, 00:19:36.315 "num_base_bdevs": 2, 00:19:36.315 "num_base_bdevs_discovered": 2, 00:19:36.315 "num_base_bdevs_operational": 2, 00:19:36.315 "base_bdevs_list": [ 00:19:36.315 { 00:19:36.315 "name": "pt1", 00:19:36.315 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:36.315 "is_configured": true, 00:19:36.315 "data_offset": 256, 00:19:36.315 "data_size": 7936 00:19:36.315 }, 00:19:36.315 { 00:19:36.315 "name": "pt2", 00:19:36.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.315 "is_configured": true, 00:19:36.315 "data_offset": 256, 00:19:36.315 "data_size": 7936 00:19:36.315 } 00:19:36.315 ] 00:19:36.315 } 00:19:36.315 } 00:19:36.315 }' 00:19:36.315 
14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:36.315 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:36.315 pt2' 00:19:36.315 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:36.574 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:36.574 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:36.574 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:36.575 
14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:36.575 [2024-11-20 14:30:15.452675] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 77f02f96-467e-4a58-aa68-cf3677cc7bab '!=' 77f02f96-467e-4a58-aa68-cf3677cc7bab ']' 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.575 [2024-11-20 14:30:15.504515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:36.575 
14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.575 14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.833 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.834 "name": "raid_bdev1", 00:19:36.834 "uuid": "77f02f96-467e-4a58-aa68-cf3677cc7bab", 
00:19:36.834 "strip_size_kb": 0, 00:19:36.834 "state": "online", 00:19:36.834 "raid_level": "raid1", 00:19:36.834 "superblock": true, 00:19:36.834 "num_base_bdevs": 2, 00:19:36.834 "num_base_bdevs_discovered": 1, 00:19:36.834 "num_base_bdevs_operational": 1, 00:19:36.834 "base_bdevs_list": [ 00:19:36.834 { 00:19:36.834 "name": null, 00:19:36.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.834 "is_configured": false, 00:19:36.834 "data_offset": 0, 00:19:36.834 "data_size": 7936 00:19:36.834 }, 00:19:36.834 { 00:19:36.834 "name": "pt2", 00:19:36.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.834 "is_configured": true, 00:19:36.834 "data_offset": 256, 00:19:36.834 "data_size": 7936 00:19:36.834 } 00:19:36.834 ] 00:19:36.834 }' 00:19:36.834 14:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.834 14:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.093 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:37.093 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.093 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.093 [2024-11-20 14:30:16.028566] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:37.093 [2024-11-20 14:30:16.028801] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:37.093 [2024-11-20 14:30:16.028938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.093 [2024-11-20 14:30:16.029027] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.093 [2024-11-20 14:30:16.029050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:37.093 14:30:16 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.093 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.093 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:37.093 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.093 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.093 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.350 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:37.350 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:37.350 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:37.350 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:37.350 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:37.350 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.350 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.350 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.350 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:19:37.351 14:30:16 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.351 [2024-11-20 14:30:16.112539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:37.351 [2024-11-20 14:30:16.112621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.351 [2024-11-20 14:30:16.112650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:37.351 [2024-11-20 14:30:16.112669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.351 [2024-11-20 14:30:16.115775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.351 [2024-11-20 14:30:16.115831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:37.351 [2024-11-20 14:30:16.115944] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:37.351 [2024-11-20 14:30:16.116037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:37.351 [2024-11-20 14:30:16.116181] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:37.351 [2024-11-20 14:30:16.116205] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:37.351 [2024-11-20 14:30:16.116510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:37.351 [2024-11-20 14:30:16.116718] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:37.351 [2024-11-20 14:30:16.116736] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:19:37.351 [2024-11-20 14:30:16.116980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.351 pt2 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.351 "name": "raid_bdev1", 00:19:37.351 "uuid": "77f02f96-467e-4a58-aa68-cf3677cc7bab", 00:19:37.351 "strip_size_kb": 0, 00:19:37.351 "state": "online", 00:19:37.351 "raid_level": "raid1", 00:19:37.351 "superblock": true, 00:19:37.351 "num_base_bdevs": 2, 00:19:37.351 "num_base_bdevs_discovered": 1, 00:19:37.351 "num_base_bdevs_operational": 1, 00:19:37.351 "base_bdevs_list": [ 00:19:37.351 { 00:19:37.351 "name": null, 00:19:37.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.351 "is_configured": false, 00:19:37.351 "data_offset": 256, 00:19:37.351 "data_size": 7936 00:19:37.351 }, 00:19:37.351 { 00:19:37.351 "name": "pt2", 00:19:37.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:37.351 "is_configured": true, 00:19:37.351 "data_offset": 256, 00:19:37.351 "data_size": 7936 00:19:37.351 } 00:19:37.351 ] 00:19:37.351 }' 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.351 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.917 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:37.917 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.917 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.917 [2024-11-20 14:30:16.633056] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:37.917 [2024-11-20 14:30:16.633097] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:37.917 [2024-11-20 14:30:16.633190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.917 [2024-11-20 14:30:16.633266] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.917 [2024-11-20 14:30:16.633283] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:37.917 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.917 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.917 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.917 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.917 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:37.917 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.917 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:37.917 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:37.917 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:37.917 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:37.917 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.917 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.917 [2024-11-20 14:30:16.693118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:37.918 [2024-11-20 14:30:16.693207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.918 [2024-11-20 14:30:16.693241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:37.918 [2024-11-20 14:30:16.693258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.918 [2024-11-20 14:30:16.696244] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.918 [2024-11-20 14:30:16.696432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:37.918 [2024-11-20 14:30:16.696569] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:37.918 [2024-11-20 14:30:16.696635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:37.918 [2024-11-20 14:30:16.696827] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:37.918 [2024-11-20 14:30:16.696848] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:37.918 [2024-11-20 14:30:16.696871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:37.918 [2024-11-20 14:30:16.696942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:37.918 [2024-11-20 14:30:16.697069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:37.918 [2024-11-20 14:30:16.697086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:37.918 [2024-11-20 14:30:16.697437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:37.918 [2024-11-20 14:30:16.697676] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:37.918 [2024-11-20 14:30:16.697704] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:37.918 pt1 00:19:37.918 [2024-11-20 14:30:16.698044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.918 "name": "raid_bdev1", 00:19:37.918 "uuid": "77f02f96-467e-4a58-aa68-cf3677cc7bab", 00:19:37.918 "strip_size_kb": 0, 00:19:37.918 "state": "online", 00:19:37.918 "raid_level": "raid1", 
00:19:37.918 "superblock": true, 00:19:37.918 "num_base_bdevs": 2, 00:19:37.918 "num_base_bdevs_discovered": 1, 00:19:37.918 "num_base_bdevs_operational": 1, 00:19:37.918 "base_bdevs_list": [ 00:19:37.918 { 00:19:37.918 "name": null, 00:19:37.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.918 "is_configured": false, 00:19:37.918 "data_offset": 256, 00:19:37.918 "data_size": 7936 00:19:37.918 }, 00:19:37.918 { 00:19:37.918 "name": "pt2", 00:19:37.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:37.918 "is_configured": true, 00:19:37.918 "data_offset": 256, 00:19:37.918 "data_size": 7936 00:19:37.918 } 00:19:37.918 ] 00:19:37.918 }' 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.918 14:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:38.483 14:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:38.484 
[2024-11-20 14:30:17.301646] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 77f02f96-467e-4a58-aa68-cf3677cc7bab '!=' 77f02f96-467e-4a58-aa68-cf3677cc7bab ']' 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86636 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86636 ']' 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86636 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86636 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86636' 00:19:38.484 killing process with pid 86636 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86636 00:19:38.484 [2024-11-20 14:30:17.380269] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:38.484 14:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86636 00:19:38.484 [2024-11-20 14:30:17.380541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.484 [2024-11-20 14:30:17.380713] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:19:38.484 [2024-11-20 14:30:17.380903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:38.741 [2024-11-20 14:30:17.582732] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:39.679 14:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:19:39.679 00:19:39.679 real 0m6.880s 00:19:39.679 user 0m10.905s 00:19:39.679 sys 0m0.993s 00:19:39.679 ************************************ 00:19:39.679 END TEST raid_superblock_test_4k 00:19:39.679 ************************************ 00:19:39.679 14:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.679 14:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:39.956 14:30:18 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:19:39.956 14:30:18 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:19:39.956 14:30:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:39.956 14:30:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.956 14:30:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:39.956 ************************************ 00:19:39.956 START TEST raid_rebuild_test_sb_4k 00:19:39.956 ************************************ 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:39.956 14:30:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86964 00:19:39.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86964 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86964 ']' 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.956 14:30:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:39.956 [2024-11-20 14:30:18.814464] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:19:39.956 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:39.956 Zero copy mechanism will not be used. 
00:19:39.956 [2024-11-20 14:30:18.814821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86964 ] 00:19:40.225 [2024-11-20 14:30:19.003576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.225 [2024-11-20 14:30:19.158022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.484 [2024-11-20 14:30:19.377668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:40.484 [2024-11-20 14:30:19.377726] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.052 BaseBdev1_malloc 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.052 [2024-11-20 14:30:19.816081] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:41.052 [2024-11-20 14:30:19.816294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.052 [2024-11-20 14:30:19.816335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:41.052 [2024-11-20 14:30:19.816356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.052 [2024-11-20 14:30:19.819108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.052 [2024-11-20 14:30:19.819160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:41.052 BaseBdev1 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.052 BaseBdev2_malloc 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.052 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.052 [2024-11-20 14:30:19.864765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:41.053 [2024-11-20 14:30:19.864997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:19:41.053 [2024-11-20 14:30:19.865041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:41.053 [2024-11-20 14:30:19.865060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.053 [2024-11-20 14:30:19.867838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.053 [2024-11-20 14:30:19.867890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:41.053 BaseBdev2 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.053 spare_malloc 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.053 spare_delay 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.053 
[2024-11-20 14:30:19.941173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:41.053 [2024-11-20 14:30:19.941252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.053 [2024-11-20 14:30:19.941284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:41.053 [2024-11-20 14:30:19.941302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.053 [2024-11-20 14:30:19.944168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.053 [2024-11-20 14:30:19.944359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:41.053 spare 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.053 [2024-11-20 14:30:19.949250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:41.053 [2024-11-20 14:30:19.951678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:41.053 [2024-11-20 14:30:19.951919] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:41.053 [2024-11-20 14:30:19.951944] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:41.053 [2024-11-20 14:30:19.952401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:41.053 [2024-11-20 14:30:19.952759] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:41.053 [2024-11-20 
14:30:19.952886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:41.053 [2024-11-20 14:30:19.953339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.053 14:30:19 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.053 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.053 "name": "raid_bdev1", 00:19:41.053 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:41.053 "strip_size_kb": 0, 00:19:41.053 "state": "online", 00:19:41.053 "raid_level": "raid1", 00:19:41.053 "superblock": true, 00:19:41.053 "num_base_bdevs": 2, 00:19:41.053 "num_base_bdevs_discovered": 2, 00:19:41.053 "num_base_bdevs_operational": 2, 00:19:41.053 "base_bdevs_list": [ 00:19:41.053 { 00:19:41.053 "name": "BaseBdev1", 00:19:41.053 "uuid": "1541e9c3-45f5-5fba-b34a-66422c309e16", 00:19:41.053 "is_configured": true, 00:19:41.053 "data_offset": 256, 00:19:41.053 "data_size": 7936 00:19:41.053 }, 00:19:41.053 { 00:19:41.053 "name": "BaseBdev2", 00:19:41.053 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:41.053 "is_configured": true, 00:19:41.053 "data_offset": 256, 00:19:41.053 "data_size": 7936 00:19:41.053 } 00:19:41.053 ] 00:19:41.053 }' 00:19:41.053 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.053 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:41.621 [2024-11-20 14:30:20.469811] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:41.621 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:41.621 
14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:41.880 [2024-11-20 14:30:20.825613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:41.880 /dev/nbd0 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:42.138 1+0 records in 00:19:42.138 1+0 records out 00:19:42.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335914 s, 12.2 MB/s 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:42.138 14:30:20 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:42.138 14:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:43.514 7936+0 records in 00:19:43.514 7936+0 records out 00:19:43.514 32505856 bytes (33 MB, 31 MiB) copied, 1.16952 s, 27.8 MB/s 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:43.514 
[2024-11-20 14:30:22.340577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:43.514 [2024-11-20 14:30:22.357040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.514 14:30:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.514 "name": "raid_bdev1", 00:19:43.514 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:43.514 "strip_size_kb": 0, 00:19:43.514 "state": "online", 00:19:43.514 "raid_level": "raid1", 00:19:43.514 "superblock": true, 00:19:43.514 "num_base_bdevs": 2, 00:19:43.514 "num_base_bdevs_discovered": 1, 00:19:43.514 "num_base_bdevs_operational": 1, 00:19:43.514 "base_bdevs_list": [ 00:19:43.514 { 00:19:43.514 "name": null, 00:19:43.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.514 "is_configured": false, 00:19:43.514 "data_offset": 0, 00:19:43.514 "data_size": 7936 00:19:43.514 }, 00:19:43.514 { 00:19:43.514 "name": "BaseBdev2", 00:19:43.514 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:43.514 "is_configured": true, 00:19:43.514 "data_offset": 256, 00:19:43.514 
"data_size": 7936 00:19:43.514 } 00:19:43.514 ] 00:19:43.514 }' 00:19:43.514 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.515 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:44.082 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:44.082 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.082 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:44.082 [2024-11-20 14:30:22.909207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:44.082 [2024-11-20 14:30:22.925826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:44.082 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.082 14:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:44.082 [2024-11-20 14:30:22.928386] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:45.019 14:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:45.019 14:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:45.019 14:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:45.019 14:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:45.019 14:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:45.019 14:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.019 14:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:19:45.019 14:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.019 14:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:45.019 14:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.019 14:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:45.019 "name": "raid_bdev1", 00:19:45.019 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:45.019 "strip_size_kb": 0, 00:19:45.019 "state": "online", 00:19:45.019 "raid_level": "raid1", 00:19:45.019 "superblock": true, 00:19:45.019 "num_base_bdevs": 2, 00:19:45.019 "num_base_bdevs_discovered": 2, 00:19:45.019 "num_base_bdevs_operational": 2, 00:19:45.019 "process": { 00:19:45.019 "type": "rebuild", 00:19:45.019 "target": "spare", 00:19:45.019 "progress": { 00:19:45.019 "blocks": 2560, 00:19:45.019 "percent": 32 00:19:45.019 } 00:19:45.019 }, 00:19:45.019 "base_bdevs_list": [ 00:19:45.019 { 00:19:45.019 "name": "spare", 00:19:45.019 "uuid": "b528dca3-7e85-5d9a-8ab9-8c68ca526c45", 00:19:45.019 "is_configured": true, 00:19:45.019 "data_offset": 256, 00:19:45.019 "data_size": 7936 00:19:45.019 }, 00:19:45.019 { 00:19:45.019 "name": "BaseBdev2", 00:19:45.019 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:45.019 "is_configured": true, 00:19:45.019 "data_offset": 256, 00:19:45.019 "data_size": 7936 00:19:45.019 } 00:19:45.019 ] 00:19:45.019 }' 00:19:45.019 14:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:45.289 [2024-11-20 14:30:24.101632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:45.289 [2024-11-20 14:30:24.137674] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:45.289 [2024-11-20 14:30:24.137800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:45.289 [2024-11-20 14:30:24.137826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:45.289 [2024-11-20 14:30:24.137842] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.289 "name": "raid_bdev1", 00:19:45.289 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:45.289 "strip_size_kb": 0, 00:19:45.289 "state": "online", 00:19:45.289 "raid_level": "raid1", 00:19:45.289 "superblock": true, 00:19:45.289 "num_base_bdevs": 2, 00:19:45.289 "num_base_bdevs_discovered": 1, 00:19:45.289 "num_base_bdevs_operational": 1, 00:19:45.289 "base_bdevs_list": [ 00:19:45.289 { 00:19:45.289 "name": null, 00:19:45.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.289 "is_configured": false, 00:19:45.289 "data_offset": 0, 00:19:45.289 "data_size": 7936 00:19:45.289 }, 00:19:45.289 { 00:19:45.289 "name": "BaseBdev2", 00:19:45.289 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:45.289 "is_configured": true, 00:19:45.289 "data_offset": 256, 00:19:45.289 "data_size": 7936 00:19:45.289 } 00:19:45.289 ] 00:19:45.289 }' 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.289 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:45.868 14:30:24 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:45.868 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:45.868 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:45.868 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:45.868 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:45.868 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.868 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.868 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.868 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:45.868 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.868 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:45.868 "name": "raid_bdev1", 00:19:45.868 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:45.868 "strip_size_kb": 0, 00:19:45.868 "state": "online", 00:19:45.868 "raid_level": "raid1", 00:19:45.868 "superblock": true, 00:19:45.868 "num_base_bdevs": 2, 00:19:45.868 "num_base_bdevs_discovered": 1, 00:19:45.868 "num_base_bdevs_operational": 1, 00:19:45.868 "base_bdevs_list": [ 00:19:45.868 { 00:19:45.868 "name": null, 00:19:45.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.868 "is_configured": false, 00:19:45.868 "data_offset": 0, 00:19:45.868 "data_size": 7936 00:19:45.868 }, 00:19:45.868 { 00:19:45.868 "name": "BaseBdev2", 00:19:45.868 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:45.868 "is_configured": true, 00:19:45.868 "data_offset": 
256, 00:19:45.868 "data_size": 7936 00:19:45.868 } 00:19:45.868 ] 00:19:45.868 }' 00:19:45.868 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.868 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:45.868 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:46.127 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:46.127 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:46.127 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.127 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:46.127 [2024-11-20 14:30:24.858615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:46.127 [2024-11-20 14:30:24.874645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:46.127 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.127 14:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:46.127 [2024-11-20 14:30:24.877285] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:47.062 14:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.062 14:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.062 14:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:47.062 14:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:47.062 14:30:25 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.062 14:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.062 14:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.062 14:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.062 14:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.062 14:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.062 14:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.062 "name": "raid_bdev1", 00:19:47.062 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:47.062 "strip_size_kb": 0, 00:19:47.062 "state": "online", 00:19:47.062 "raid_level": "raid1", 00:19:47.062 "superblock": true, 00:19:47.062 "num_base_bdevs": 2, 00:19:47.062 "num_base_bdevs_discovered": 2, 00:19:47.062 "num_base_bdevs_operational": 2, 00:19:47.062 "process": { 00:19:47.062 "type": "rebuild", 00:19:47.062 "target": "spare", 00:19:47.062 "progress": { 00:19:47.062 "blocks": 2560, 00:19:47.062 "percent": 32 00:19:47.062 } 00:19:47.062 }, 00:19:47.062 "base_bdevs_list": [ 00:19:47.062 { 00:19:47.062 "name": "spare", 00:19:47.062 "uuid": "b528dca3-7e85-5d9a-8ab9-8c68ca526c45", 00:19:47.062 "is_configured": true, 00:19:47.062 "data_offset": 256, 00:19:47.062 "data_size": 7936 00:19:47.062 }, 00:19:47.062 { 00:19:47.062 "name": "BaseBdev2", 00:19:47.062 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:47.062 "is_configured": true, 00:19:47.062 "data_offset": 256, 00:19:47.062 "data_size": 7936 00:19:47.062 } 00:19:47.062 ] 00:19:47.062 }' 00:19:47.062 14:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.062 14:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:19:47.062 14:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:47.321 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=733 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.321 14:30:26 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.321 "name": "raid_bdev1", 00:19:47.321 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:47.321 "strip_size_kb": 0, 00:19:47.321 "state": "online", 00:19:47.321 "raid_level": "raid1", 00:19:47.321 "superblock": true, 00:19:47.321 "num_base_bdevs": 2, 00:19:47.321 "num_base_bdevs_discovered": 2, 00:19:47.321 "num_base_bdevs_operational": 2, 00:19:47.321 "process": { 00:19:47.321 "type": "rebuild", 00:19:47.321 "target": "spare", 00:19:47.321 "progress": { 00:19:47.321 "blocks": 2816, 00:19:47.321 "percent": 35 00:19:47.321 } 00:19:47.321 }, 00:19:47.321 "base_bdevs_list": [ 00:19:47.321 { 00:19:47.321 "name": "spare", 00:19:47.321 "uuid": "b528dca3-7e85-5d9a-8ab9-8c68ca526c45", 00:19:47.321 "is_configured": true, 00:19:47.321 "data_offset": 256, 00:19:47.321 "data_size": 7936 00:19:47.321 }, 00:19:47.321 { 00:19:47.321 "name": "BaseBdev2", 00:19:47.321 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:47.321 "is_configured": true, 00:19:47.321 "data_offset": 256, 00:19:47.321 "data_size": 7936 00:19:47.321 } 00:19:47.321 ] 00:19:47.321 }' 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:47.321 14:30:26 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:19:48.256 14:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:48.256 14:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:48.256 14:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.256 14:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:48.256 14:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:48.256 14:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.256 14:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.256 14:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.256 14:30:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.256 14:30:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:48.256 14:30:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.515 14:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.515 "name": "raid_bdev1", 00:19:48.515 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:48.515 "strip_size_kb": 0, 00:19:48.515 "state": "online", 00:19:48.515 "raid_level": "raid1", 00:19:48.515 "superblock": true, 00:19:48.515 "num_base_bdevs": 2, 00:19:48.515 "num_base_bdevs_discovered": 2, 00:19:48.515 "num_base_bdevs_operational": 2, 00:19:48.516 "process": { 00:19:48.516 "type": "rebuild", 00:19:48.516 "target": "spare", 00:19:48.516 "progress": { 00:19:48.516 "blocks": 5888, 00:19:48.516 "percent": 74 00:19:48.516 } 00:19:48.516 }, 00:19:48.516 "base_bdevs_list": [ 00:19:48.516 { 
00:19:48.516 "name": "spare", 00:19:48.516 "uuid": "b528dca3-7e85-5d9a-8ab9-8c68ca526c45", 00:19:48.516 "is_configured": true, 00:19:48.516 "data_offset": 256, 00:19:48.516 "data_size": 7936 00:19:48.516 }, 00:19:48.516 { 00:19:48.516 "name": "BaseBdev2", 00:19:48.516 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:48.516 "is_configured": true, 00:19:48.516 "data_offset": 256, 00:19:48.516 "data_size": 7936 00:19:48.516 } 00:19:48.516 ] 00:19:48.516 }' 00:19:48.516 14:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.516 14:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:48.516 14:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.516 14:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:48.516 14:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:49.083 [2024-11-20 14:30:28.000385] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:49.083 [2024-11-20 14:30:28.000488] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:49.083 [2024-11-20 14:30:28.000653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.649 "name": "raid_bdev1", 00:19:49.649 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:49.649 "strip_size_kb": 0, 00:19:49.649 "state": "online", 00:19:49.649 "raid_level": "raid1", 00:19:49.649 "superblock": true, 00:19:49.649 "num_base_bdevs": 2, 00:19:49.649 "num_base_bdevs_discovered": 2, 00:19:49.649 "num_base_bdevs_operational": 2, 00:19:49.649 "base_bdevs_list": [ 00:19:49.649 { 00:19:49.649 "name": "spare", 00:19:49.649 "uuid": "b528dca3-7e85-5d9a-8ab9-8c68ca526c45", 00:19:49.649 "is_configured": true, 00:19:49.649 "data_offset": 256, 00:19:49.649 "data_size": 7936 00:19:49.649 }, 00:19:49.649 { 00:19:49.649 "name": "BaseBdev2", 00:19:49.649 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:49.649 "is_configured": true, 00:19:49.649 "data_offset": 256, 00:19:49.649 "data_size": 7936 00:19:49.649 } 00:19:49.649 ] 00:19:49.649 }' 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.649 "name": "raid_bdev1", 00:19:49.649 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:49.649 "strip_size_kb": 0, 00:19:49.649 "state": "online", 00:19:49.649 "raid_level": "raid1", 00:19:49.649 "superblock": true, 00:19:49.649 "num_base_bdevs": 2, 00:19:49.649 "num_base_bdevs_discovered": 2, 00:19:49.649 "num_base_bdevs_operational": 2, 00:19:49.649 "base_bdevs_list": [ 00:19:49.649 { 00:19:49.649 "name": "spare", 00:19:49.649 "uuid": "b528dca3-7e85-5d9a-8ab9-8c68ca526c45", 00:19:49.649 "is_configured": true, 00:19:49.649 
"data_offset": 256, 00:19:49.649 "data_size": 7936 00:19:49.649 }, 00:19:49.649 { 00:19:49.649 "name": "BaseBdev2", 00:19:49.649 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:49.649 "is_configured": true, 00:19:49.649 "data_offset": 256, 00:19:49.649 "data_size": 7936 00:19:49.649 } 00:19:49.649 ] 00:19:49.649 }' 00:19:49.649 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.907 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:49.907 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.907 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:49.907 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:49.907 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.908 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.908 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.908 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.908 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:49.908 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.908 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.908 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.908 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.908 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:19:49.908 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.908 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.908 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:49.908 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.908 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.908 "name": "raid_bdev1", 00:19:49.908 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:49.908 "strip_size_kb": 0, 00:19:49.908 "state": "online", 00:19:49.908 "raid_level": "raid1", 00:19:49.908 "superblock": true, 00:19:49.908 "num_base_bdevs": 2, 00:19:49.908 "num_base_bdevs_discovered": 2, 00:19:49.908 "num_base_bdevs_operational": 2, 00:19:49.908 "base_bdevs_list": [ 00:19:49.908 { 00:19:49.908 "name": "spare", 00:19:49.908 "uuid": "b528dca3-7e85-5d9a-8ab9-8c68ca526c45", 00:19:49.908 "is_configured": true, 00:19:49.908 "data_offset": 256, 00:19:49.908 "data_size": 7936 00:19:49.908 }, 00:19:49.908 { 00:19:49.908 "name": "BaseBdev2", 00:19:49.908 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:49.908 "is_configured": true, 00:19:49.908 "data_offset": 256, 00:19:49.908 "data_size": 7936 00:19:49.908 } 00:19:49.908 ] 00:19:49.908 }' 00:19:49.908 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.908 14:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:50.549 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:50.549 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.549 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:50.550 
[2024-11-20 14:30:29.228696] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:50.550 [2024-11-20 14:30:29.230104] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:50.550 [2024-11-20 14:30:29.230237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:50.550 [2024-11-20 14:30:29.230330] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:50.550 [2024-11-20 14:30:29.230352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:50.550 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:50.808 /dev/nbd0 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:50.808 1+0 records in 00:19:50.808 1+0 records out 00:19:50.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398821 s, 10.3 MB/s 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:50.808 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:51.067 /dev/nbd1 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:51.067 1+0 records in 00:19:51.067 1+0 records out 00:19:51.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408412 s, 10.0 MB/s 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:51.067 14:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:51.325 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:51.325 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:51.325 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:51.325 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:51.325 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:51.325 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:51.325 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:51.584 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:51.585 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:51.585 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:51.585 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:51.585 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:51.585 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:51.585 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:51.585 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:51.585 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:51.585 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:51.843 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:51.843 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:51.844 14:30:30 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.844 [2024-11-20 14:30:30.669620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:51.844 [2024-11-20 14:30:30.669702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.844 [2024-11-20 14:30:30.669742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:51.844 [2024-11-20 14:30:30.669758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.844 [2024-11-20 14:30:30.672731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.844 
[2024-11-20 14:30:30.672781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:51.844 [2024-11-20 14:30:30.672912] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:51.844 [2024-11-20 14:30:30.673018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:51.844 [2024-11-20 14:30:30.673285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:51.844 spare 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.844 [2024-11-20 14:30:30.773443] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:51.844 [2024-11-20 14:30:30.773524] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:51.844 [2024-11-20 14:30:30.773969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:51.844 [2024-11-20 14:30:30.774292] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:51.844 [2024-11-20 14:30:30.774311] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:51.844 [2024-11-20 14:30:30.774575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:51.844 14:30:30 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.844 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.103 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.103 "name": "raid_bdev1", 00:19:52.103 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:52.103 "strip_size_kb": 0, 00:19:52.103 "state": "online", 00:19:52.103 "raid_level": "raid1", 00:19:52.103 "superblock": true, 00:19:52.103 "num_base_bdevs": 2, 00:19:52.103 "num_base_bdevs_discovered": 2, 00:19:52.103 "num_base_bdevs_operational": 2, 
00:19:52.103 "base_bdevs_list": [ 00:19:52.103 { 00:19:52.103 "name": "spare", 00:19:52.103 "uuid": "b528dca3-7e85-5d9a-8ab9-8c68ca526c45", 00:19:52.103 "is_configured": true, 00:19:52.103 "data_offset": 256, 00:19:52.103 "data_size": 7936 00:19:52.103 }, 00:19:52.103 { 00:19:52.103 "name": "BaseBdev2", 00:19:52.103 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:52.103 "is_configured": true, 00:19:52.103 "data_offset": 256, 00:19:52.103 "data_size": 7936 00:19:52.103 } 00:19:52.103 ] 00:19:52.103 }' 00:19:52.103 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.103 14:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.362 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:52.362 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.362 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:52.362 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:52.362 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.362 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.362 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.362 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.362 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.362 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.620 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.620 "name": "raid_bdev1", 00:19:52.620 
"uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:52.620 "strip_size_kb": 0, 00:19:52.620 "state": "online", 00:19:52.620 "raid_level": "raid1", 00:19:52.620 "superblock": true, 00:19:52.620 "num_base_bdevs": 2, 00:19:52.620 "num_base_bdevs_discovered": 2, 00:19:52.620 "num_base_bdevs_operational": 2, 00:19:52.620 "base_bdevs_list": [ 00:19:52.621 { 00:19:52.621 "name": "spare", 00:19:52.621 "uuid": "b528dca3-7e85-5d9a-8ab9-8c68ca526c45", 00:19:52.621 "is_configured": true, 00:19:52.621 "data_offset": 256, 00:19:52.621 "data_size": 7936 00:19:52.621 }, 00:19:52.621 { 00:19:52.621 "name": "BaseBdev2", 00:19:52.621 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:52.621 "is_configured": true, 00:19:52.621 "data_offset": 256, 00:19:52.621 "data_size": 7936 00:19:52.621 } 00:19:52.621 ] 00:19:52.621 }' 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.621 [2024-11-20 14:30:31.522764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.621 
14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.621 "name": "raid_bdev1", 00:19:52.621 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:52.621 "strip_size_kb": 0, 00:19:52.621 "state": "online", 00:19:52.621 "raid_level": "raid1", 00:19:52.621 "superblock": true, 00:19:52.621 "num_base_bdevs": 2, 00:19:52.621 "num_base_bdevs_discovered": 1, 00:19:52.621 "num_base_bdevs_operational": 1, 00:19:52.621 "base_bdevs_list": [ 00:19:52.621 { 00:19:52.621 "name": null, 00:19:52.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.621 "is_configured": false, 00:19:52.621 "data_offset": 0, 00:19:52.621 "data_size": 7936 00:19:52.621 }, 00:19:52.621 { 00:19:52.621 "name": "BaseBdev2", 00:19:52.621 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:52.621 "is_configured": true, 00:19:52.621 "data_offset": 256, 00:19:52.621 "data_size": 7936 00:19:52.621 } 00:19:52.621 ] 00:19:52.621 }' 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.621 14:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:53.188 14:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:53.188 14:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.188 14:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:53.188 [2024-11-20 14:30:32.006899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:53.188 [2024-11-20 14:30:32.007331] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:19:53.188 [2024-11-20 14:30:32.007386] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:53.188 [2024-11-20 14:30:32.007440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:53.188 [2024-11-20 14:30:32.023403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:53.188 14:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.188 14:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:53.188 [2024-11-20 14:30:32.026125] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:54.122 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.122 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.122 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.122 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.122 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.122 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.122 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.122 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.122 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.122 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.122 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.122 
"name": "raid_bdev1", 00:19:54.122 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:54.122 "strip_size_kb": 0, 00:19:54.122 "state": "online", 00:19:54.122 "raid_level": "raid1", 00:19:54.122 "superblock": true, 00:19:54.122 "num_base_bdevs": 2, 00:19:54.122 "num_base_bdevs_discovered": 2, 00:19:54.122 "num_base_bdevs_operational": 2, 00:19:54.122 "process": { 00:19:54.122 "type": "rebuild", 00:19:54.122 "target": "spare", 00:19:54.122 "progress": { 00:19:54.122 "blocks": 2560, 00:19:54.122 "percent": 32 00:19:54.122 } 00:19:54.122 }, 00:19:54.122 "base_bdevs_list": [ 00:19:54.122 { 00:19:54.122 "name": "spare", 00:19:54.122 "uuid": "b528dca3-7e85-5d9a-8ab9-8c68ca526c45", 00:19:54.122 "is_configured": true, 00:19:54.122 "data_offset": 256, 00:19:54.122 "data_size": 7936 00:19:54.122 }, 00:19:54.122 { 00:19:54.122 "name": "BaseBdev2", 00:19:54.123 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:54.123 "is_configured": true, 00:19:54.123 "data_offset": 256, 00:19:54.123 "data_size": 7936 00:19:54.123 } 00:19:54.123 ] 00:19:54.123 }' 00:19:54.123 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.381 [2024-11-20 14:30:33.195559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:54.381 [2024-11-20 
14:30:33.235492] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:54.381 [2024-11-20 14:30:33.235907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.381 [2024-11-20 14:30:33.235937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:54.381 [2024-11-20 14:30:33.235954] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.381 "name": "raid_bdev1", 00:19:54.381 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:54.381 "strip_size_kb": 0, 00:19:54.381 "state": "online", 00:19:54.381 "raid_level": "raid1", 00:19:54.381 "superblock": true, 00:19:54.381 "num_base_bdevs": 2, 00:19:54.381 "num_base_bdevs_discovered": 1, 00:19:54.381 "num_base_bdevs_operational": 1, 00:19:54.381 "base_bdevs_list": [ 00:19:54.381 { 00:19:54.381 "name": null, 00:19:54.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.381 "is_configured": false, 00:19:54.381 "data_offset": 0, 00:19:54.381 "data_size": 7936 00:19:54.381 }, 00:19:54.381 { 00:19:54.381 "name": "BaseBdev2", 00:19:54.381 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:54.381 "is_configured": true, 00:19:54.381 "data_offset": 256, 00:19:54.381 "data_size": 7936 00:19:54.381 } 00:19:54.381 ] 00:19:54.381 }' 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.381 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.949 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:54.949 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.949 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.949 [2024-11-20 14:30:33.776193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:54.949 [2024-11-20 14:30:33.776413] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.949 [2024-11-20 14:30:33.776492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:54.949 [2024-11-20 14:30:33.776518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.949 [2024-11-20 14:30:33.777144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.949 [2024-11-20 14:30:33.777189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:54.950 [2024-11-20 14:30:33.777311] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:54.950 [2024-11-20 14:30:33.777335] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:54.950 [2024-11-20 14:30:33.777354] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:54.950 [2024-11-20 14:30:33.777385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:54.950 [2024-11-20 14:30:33.793253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:54.950 spare 00:19:54.950 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.950 14:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:54.950 [2024-11-20 14:30:33.795736] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:55.884 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.884 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.884 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:55.884 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:55.884 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.884 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.884 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.884 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.884 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:55.884 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.884 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.884 "name": "raid_bdev1", 00:19:55.884 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:55.884 "strip_size_kb": 0, 00:19:55.884 
"state": "online", 00:19:55.884 "raid_level": "raid1", 00:19:55.884 "superblock": true, 00:19:55.884 "num_base_bdevs": 2, 00:19:55.884 "num_base_bdevs_discovered": 2, 00:19:55.884 "num_base_bdevs_operational": 2, 00:19:55.884 "process": { 00:19:55.884 "type": "rebuild", 00:19:55.884 "target": "spare", 00:19:55.884 "progress": { 00:19:55.884 "blocks": 2560, 00:19:55.884 "percent": 32 00:19:55.884 } 00:19:55.884 }, 00:19:55.884 "base_bdevs_list": [ 00:19:55.884 { 00:19:55.884 "name": "spare", 00:19:55.884 "uuid": "b528dca3-7e85-5d9a-8ab9-8c68ca526c45", 00:19:55.884 "is_configured": true, 00:19:55.884 "data_offset": 256, 00:19:55.884 "data_size": 7936 00:19:55.884 }, 00:19:55.884 { 00:19:55.884 "name": "BaseBdev2", 00:19:55.884 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:55.884 "is_configured": true, 00:19:55.884 "data_offset": 256, 00:19:55.884 "data_size": 7936 00:19:55.884 } 00:19:55.884 ] 00:19:55.884 }' 00:19:55.884 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.143 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.143 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.143 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.143 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:56.143 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.143 14:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.143 [2024-11-20 14:30:34.965294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.143 [2024-11-20 14:30:35.004560] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:19:56.143 [2024-11-20 14:30:35.004674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.143 [2024-11-20 14:30:35.004703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.143 [2024-11-20 14:30:35.004715] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.143 14:30:35 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.143 "name": "raid_bdev1", 00:19:56.143 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:56.143 "strip_size_kb": 0, 00:19:56.143 "state": "online", 00:19:56.143 "raid_level": "raid1", 00:19:56.143 "superblock": true, 00:19:56.143 "num_base_bdevs": 2, 00:19:56.143 "num_base_bdevs_discovered": 1, 00:19:56.143 "num_base_bdevs_operational": 1, 00:19:56.143 "base_bdevs_list": [ 00:19:56.143 { 00:19:56.143 "name": null, 00:19:56.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.143 "is_configured": false, 00:19:56.143 "data_offset": 0, 00:19:56.143 "data_size": 7936 00:19:56.143 }, 00:19:56.143 { 00:19:56.143 "name": "BaseBdev2", 00:19:56.143 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:56.143 "is_configured": true, 00:19:56.143 "data_offset": 256, 00:19:56.143 "data_size": 7936 00:19:56.143 } 00:19:56.143 ] 00:19:56.143 }' 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.143 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.709 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:56.709 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.709 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:56.709 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:56.709 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.709 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.709 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.709 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.709 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.709 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.709 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.709 "name": "raid_bdev1", 00:19:56.709 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:56.709 "strip_size_kb": 0, 00:19:56.709 "state": "online", 00:19:56.709 "raid_level": "raid1", 00:19:56.709 "superblock": true, 00:19:56.709 "num_base_bdevs": 2, 00:19:56.709 "num_base_bdevs_discovered": 1, 00:19:56.709 "num_base_bdevs_operational": 1, 00:19:56.709 "base_bdevs_list": [ 00:19:56.709 { 00:19:56.709 "name": null, 00:19:56.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.709 "is_configured": false, 00:19:56.709 "data_offset": 0, 00:19:56.709 "data_size": 7936 00:19:56.709 }, 00:19:56.709 { 00:19:56.709 "name": "BaseBdev2", 00:19:56.709 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:56.709 "is_configured": true, 00:19:56.709 "data_offset": 256, 00:19:56.709 "data_size": 7936 00:19:56.709 } 00:19:56.709 ] 00:19:56.709 }' 00:19:56.709 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.709 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:56.709 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.970 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:56.970 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:56.970 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.970 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.970 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.970 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:56.970 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.970 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.970 [2024-11-20 14:30:35.724562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:56.970 [2024-11-20 14:30:35.724649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.970 [2024-11-20 14:30:35.724693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:56.970 [2024-11-20 14:30:35.724723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.970 [2024-11-20 14:30:35.725385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.970 [2024-11-20 14:30:35.725427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:56.970 [2024-11-20 14:30:35.725571] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:56.970 [2024-11-20 14:30:35.725598] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:56.970 [2024-11-20 14:30:35.725620] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:56.970 [2024-11-20 14:30:35.725638] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:56.970 BaseBdev1 00:19:56.970 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.970 14:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.904 "name": "raid_bdev1", 00:19:57.904 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:57.904 "strip_size_kb": 0, 00:19:57.904 "state": "online", 00:19:57.904 "raid_level": "raid1", 00:19:57.904 "superblock": true, 00:19:57.904 "num_base_bdevs": 2, 00:19:57.904 "num_base_bdevs_discovered": 1, 00:19:57.904 "num_base_bdevs_operational": 1, 00:19:57.904 "base_bdevs_list": [ 00:19:57.904 { 00:19:57.904 "name": null, 00:19:57.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.904 "is_configured": false, 00:19:57.904 "data_offset": 0, 00:19:57.904 "data_size": 7936 00:19:57.904 }, 00:19:57.904 { 00:19:57.904 "name": "BaseBdev2", 00:19:57.904 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:57.904 "is_configured": true, 00:19:57.904 "data_offset": 256, 00:19:57.904 "data_size": 7936 00:19:57.904 } 00:19:57.904 ] 00:19:57.904 }' 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.904 14:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.469 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:58.469 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.469 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:58.469 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:58.469 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.469 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.469 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:19:58.469 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.469 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.469 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.469 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.469 "name": "raid_bdev1", 00:19:58.469 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:58.469 "strip_size_kb": 0, 00:19:58.469 "state": "online", 00:19:58.469 "raid_level": "raid1", 00:19:58.469 "superblock": true, 00:19:58.469 "num_base_bdevs": 2, 00:19:58.469 "num_base_bdevs_discovered": 1, 00:19:58.469 "num_base_bdevs_operational": 1, 00:19:58.469 "base_bdevs_list": [ 00:19:58.469 { 00:19:58.469 "name": null, 00:19:58.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.469 "is_configured": false, 00:19:58.470 "data_offset": 0, 00:19:58.470 "data_size": 7936 00:19:58.470 }, 00:19:58.470 { 00:19:58.470 "name": "BaseBdev2", 00:19:58.470 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:58.470 "is_configured": true, 00:19:58.470 "data_offset": 256, 00:19:58.470 "data_size": 7936 00:19:58.470 } 00:19:58.470 ] 00:19:58.470 }' 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.470 [2024-11-20 14:30:37.377172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:58.470 [2024-11-20 14:30:37.377400] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:58.470 [2024-11-20 14:30:37.377426] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:58.470 request: 00:19:58.470 { 00:19:58.470 "base_bdev": "BaseBdev1", 00:19:58.470 "raid_bdev": "raid_bdev1", 00:19:58.470 "method": "bdev_raid_add_base_bdev", 00:19:58.470 "req_id": 1 00:19:58.470 } 00:19:58.470 Got JSON-RPC error response 00:19:58.470 response: 00:19:58.470 { 00:19:58.470 "code": -22, 00:19:58.470 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:58.470 } 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:58.470 14:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:59.409 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:59.409 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:59.667 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:59.667 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:59.667 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:59.668 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:59.668 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.668 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.668 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.668 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.668 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.668 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.668 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:59.668 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.668 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.668 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.668 "name": "raid_bdev1", 00:19:59.668 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:59.668 "strip_size_kb": 0, 00:19:59.668 "state": "online", 00:19:59.668 "raid_level": "raid1", 00:19:59.668 "superblock": true, 00:19:59.668 "num_base_bdevs": 2, 00:19:59.668 "num_base_bdevs_discovered": 1, 00:19:59.668 "num_base_bdevs_operational": 1, 00:19:59.668 "base_bdevs_list": [ 00:19:59.668 { 00:19:59.668 "name": null, 00:19:59.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.668 "is_configured": false, 00:19:59.668 "data_offset": 0, 00:19:59.668 "data_size": 7936 00:19:59.668 }, 00:19:59.668 { 00:19:59.668 "name": "BaseBdev2", 00:19:59.668 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:59.668 "is_configured": true, 00:19:59.668 "data_offset": 256, 00:19:59.668 "data_size": 7936 00:19:59.668 } 00:19:59.668 ] 00:19:59.668 }' 00:19:59.668 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.668 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.926 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:59.926 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:59.926 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:59.926 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:59.926 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:59.926 14:30:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.926 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.926 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.926 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.926 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.926 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:59.926 "name": "raid_bdev1", 00:19:59.926 "uuid": "7c29d69d-c669-4008-978b-0eee64735c4c", 00:19:59.926 "strip_size_kb": 0, 00:19:59.926 "state": "online", 00:19:59.926 "raid_level": "raid1", 00:19:59.926 "superblock": true, 00:19:59.926 "num_base_bdevs": 2, 00:19:59.926 "num_base_bdevs_discovered": 1, 00:19:59.926 "num_base_bdevs_operational": 1, 00:19:59.926 "base_bdevs_list": [ 00:19:59.926 { 00:19:59.926 "name": null, 00:19:59.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.926 "is_configured": false, 00:19:59.926 "data_offset": 0, 00:19:59.926 "data_size": 7936 00:19:59.926 }, 00:19:59.926 { 00:19:59.926 "name": "BaseBdev2", 00:19:59.926 "uuid": "c8eedcfb-a6f6-54e9-bf29-1ea3c298e4d6", 00:19:59.926 "is_configured": true, 00:19:59.926 "data_offset": 256, 00:19:59.926 "data_size": 7936 00:19:59.926 } 00:19:59.926 ] 00:19:59.926 }' 00:19:59.926 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.186 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:00.186 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.186 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:00.186 14:30:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86964 00:20:00.186 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86964 ']' 00:20:00.186 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86964 00:20:00.186 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:20:00.186 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.186 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86964 00:20:00.186 killing process with pid 86964 00:20:00.186 Received shutdown signal, test time was about 60.000000 seconds 00:20:00.186 00:20:00.186 Latency(us) 00:20:00.186 [2024-11-20T14:30:39.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.186 [2024-11-20T14:30:39.168Z] =================================================================================================================== 00:20:00.186 [2024-11-20T14:30:39.168Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:00.186 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:00.186 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:00.186 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86964' 00:20:00.186 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86964 00:20:00.186 [2024-11-20 14:30:38.983967] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:00.186 14:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86964 00:20:00.186 [2024-11-20 14:30:38.984144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.186 [2024-11-20 
14:30:38.984218] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:00.186 [2024-11-20 14:30:38.984238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:00.474 [2024-11-20 14:30:39.258383] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:01.408 14:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:20:01.408 00:20:01.408 real 0m21.606s 00:20:01.409 user 0m29.045s 00:20:01.409 sys 0m2.556s 00:20:01.409 ************************************ 00:20:01.409 END TEST raid_rebuild_test_sb_4k 00:20:01.409 ************************************ 00:20:01.409 14:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.409 14:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.409 14:30:40 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:20:01.409 14:30:40 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:20:01.409 14:30:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:01.409 14:30:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.409 14:30:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:01.409 ************************************ 00:20:01.409 START TEST raid_state_function_test_sb_md_separate 00:20:01.409 ************************************ 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:01.409 
14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:01.409 Process raid pid: 87675 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 
00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87675 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87675' 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87675 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87675 ']' 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.409 14:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.667 [2024-11-20 14:30:40.470541] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:20:01.667 [2024-11-20 14:30:40.471012] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.925 [2024-11-20 14:30:40.657591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.925 [2024-11-20 14:30:40.817347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.183 [2024-11-20 14:30:41.025748] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:02.183 [2024-11-20 14:30:41.026091] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:02.748 [2024-11-20 14:30:41.570387] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:02.748 [2024-11-20 14:30:41.570463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:20:02.748 [2024-11-20 14:30:41.570480] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:02.748 [2024-11-20 14:30:41.570498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.748 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.748 "name": "Existed_Raid", 00:20:02.748 "uuid": "bec54a04-248f-41ec-b616-0d9af92b0b90", 00:20:02.748 "strip_size_kb": 0, 00:20:02.748 "state": "configuring", 00:20:02.748 "raid_level": "raid1", 00:20:02.748 "superblock": true, 00:20:02.748 "num_base_bdevs": 2, 00:20:02.748 "num_base_bdevs_discovered": 0, 00:20:02.748 "num_base_bdevs_operational": 2, 00:20:02.749 "base_bdevs_list": [ 00:20:02.749 { 00:20:02.749 "name": "BaseBdev1", 00:20:02.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.749 "is_configured": false, 00:20:02.749 "data_offset": 0, 00:20:02.749 "data_size": 0 00:20:02.749 }, 00:20:02.749 { 00:20:02.749 "name": "BaseBdev2", 00:20:02.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.749 "is_configured": false, 00:20:02.749 "data_offset": 0, 00:20:02.749 "data_size": 0 00:20:02.749 } 00:20:02.749 ] 00:20:02.749 }' 00:20:02.749 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.749 14:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.399 
[2024-11-20 14:30:42.090727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:03.399 [2024-11-20 14:30:42.090792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.399 [2024-11-20 14:30:42.098702] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:03.399 [2024-11-20 14:30:42.098759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:03.399 [2024-11-20 14:30:42.098774] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:03.399 [2024-11-20 14:30:42.098794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.399 [2024-11-20 14:30:42.149065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.399 
BaseBdev1 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.399 [ 00:20:03.399 { 00:20:03.399 "name": "BaseBdev1", 00:20:03.399 "aliases": [ 00:20:03.399 "057954c7-1f84-4e52-b2a9-293cfd1a368f" 00:20:03.399 ], 00:20:03.399 "product_name": "Malloc disk", 
00:20:03.399 "block_size": 4096, 00:20:03.399 "num_blocks": 8192, 00:20:03.399 "uuid": "057954c7-1f84-4e52-b2a9-293cfd1a368f", 00:20:03.399 "md_size": 32, 00:20:03.399 "md_interleave": false, 00:20:03.399 "dif_type": 0, 00:20:03.399 "assigned_rate_limits": { 00:20:03.399 "rw_ios_per_sec": 0, 00:20:03.399 "rw_mbytes_per_sec": 0, 00:20:03.399 "r_mbytes_per_sec": 0, 00:20:03.399 "w_mbytes_per_sec": 0 00:20:03.399 }, 00:20:03.399 "claimed": true, 00:20:03.399 "claim_type": "exclusive_write", 00:20:03.399 "zoned": false, 00:20:03.399 "supported_io_types": { 00:20:03.399 "read": true, 00:20:03.399 "write": true, 00:20:03.399 "unmap": true, 00:20:03.399 "flush": true, 00:20:03.399 "reset": true, 00:20:03.399 "nvme_admin": false, 00:20:03.399 "nvme_io": false, 00:20:03.399 "nvme_io_md": false, 00:20:03.399 "write_zeroes": true, 00:20:03.399 "zcopy": true, 00:20:03.399 "get_zone_info": false, 00:20:03.399 "zone_management": false, 00:20:03.399 "zone_append": false, 00:20:03.399 "compare": false, 00:20:03.399 "compare_and_write": false, 00:20:03.399 "abort": true, 00:20:03.399 "seek_hole": false, 00:20:03.399 "seek_data": false, 00:20:03.399 "copy": true, 00:20:03.399 "nvme_iov_md": false 00:20:03.399 }, 00:20:03.399 "memory_domains": [ 00:20:03.399 { 00:20:03.399 "dma_device_id": "system", 00:20:03.399 "dma_device_type": 1 00:20:03.399 }, 00:20:03.399 { 00:20:03.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.399 "dma_device_type": 2 00:20:03.399 } 00:20:03.399 ], 00:20:03.399 "driver_specific": {} 00:20:03.399 } 00:20:03.399 ] 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:03.399 14:30:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.399 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:03.400 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.400 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.400 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.400 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.400 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.400 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.400 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.400 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.400 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.400 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.400 "name": "Existed_Raid", 00:20:03.400 "uuid": "1c701dc0-53cc-4ffe-be07-0d23d91b9b4c", 
00:20:03.400 "strip_size_kb": 0, 00:20:03.400 "state": "configuring", 00:20:03.400 "raid_level": "raid1", 00:20:03.400 "superblock": true, 00:20:03.400 "num_base_bdevs": 2, 00:20:03.400 "num_base_bdevs_discovered": 1, 00:20:03.400 "num_base_bdevs_operational": 2, 00:20:03.400 "base_bdevs_list": [ 00:20:03.400 { 00:20:03.400 "name": "BaseBdev1", 00:20:03.400 "uuid": "057954c7-1f84-4e52-b2a9-293cfd1a368f", 00:20:03.400 "is_configured": true, 00:20:03.400 "data_offset": 256, 00:20:03.400 "data_size": 7936 00:20:03.400 }, 00:20:03.400 { 00:20:03.400 "name": "BaseBdev2", 00:20:03.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.400 "is_configured": false, 00:20:03.400 "data_offset": 0, 00:20:03.400 "data_size": 0 00:20:03.400 } 00:20:03.400 ] 00:20:03.400 }' 00:20:03.400 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.400 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.967 [2024-11-20 14:30:42.645362] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:03.967 [2024-11-20 14:30:42.645439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:03.967 14:30:42 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.967 [2024-11-20 14:30:42.653437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.967 [2024-11-20 14:30:42.656546] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:03.967 [2024-11-20 14:30:42.656786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.967 "name": "Existed_Raid", 00:20:03.967 "uuid": "c3edda91-5795-40c7-b6f2-55ca8ee2340a", 00:20:03.967 "strip_size_kb": 0, 00:20:03.967 "state": "configuring", 00:20:03.967 "raid_level": "raid1", 00:20:03.967 "superblock": true, 00:20:03.967 "num_base_bdevs": 2, 00:20:03.967 "num_base_bdevs_discovered": 1, 00:20:03.967 "num_base_bdevs_operational": 2, 00:20:03.967 "base_bdevs_list": [ 00:20:03.967 { 00:20:03.967 "name": "BaseBdev1", 00:20:03.967 "uuid": "057954c7-1f84-4e52-b2a9-293cfd1a368f", 00:20:03.967 "is_configured": true, 00:20:03.967 "data_offset": 256, 00:20:03.967 "data_size": 7936 00:20:03.967 }, 00:20:03.967 { 00:20:03.967 "name": "BaseBdev2", 00:20:03.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.967 "is_configured": false, 00:20:03.967 "data_offset": 0, 00:20:03.967 "data_size": 0 00:20:03.967 } 00:20:03.967 ] 00:20:03.967 }' 00:20:03.967 14:30:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.967 14:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.225 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:20:04.225 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.225 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.484 [2024-11-20 14:30:43.232581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:04.484 [2024-11-20 14:30:43.232891] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:04.484 [2024-11-20 14:30:43.232922] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:04.484 [2024-11-20 14:30:43.233045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:04.484 BaseBdev2 00:20:04.484 [2024-11-20 14:30:43.233222] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:04.484 [2024-11-20 14:30:43.233250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:04.484 [2024-11-20 14:30:43.233378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.485 [ 00:20:04.485 { 00:20:04.485 "name": "BaseBdev2", 00:20:04.485 "aliases": [ 00:20:04.485 "de2f59cd-d30c-4f75-b195-69e423287beb" 00:20:04.485 ], 00:20:04.485 "product_name": "Malloc disk", 00:20:04.485 "block_size": 4096, 00:20:04.485 "num_blocks": 8192, 00:20:04.485 "uuid": "de2f59cd-d30c-4f75-b195-69e423287beb", 00:20:04.485 "md_size": 32, 00:20:04.485 "md_interleave": false, 00:20:04.485 "dif_type": 0, 00:20:04.485 "assigned_rate_limits": { 00:20:04.485 "rw_ios_per_sec": 0, 00:20:04.485 "rw_mbytes_per_sec": 0, 00:20:04.485 "r_mbytes_per_sec": 0, 00:20:04.485 "w_mbytes_per_sec": 0 00:20:04.485 }, 00:20:04.485 "claimed": true, 00:20:04.485 "claim_type": 
"exclusive_write", 00:20:04.485 "zoned": false, 00:20:04.485 "supported_io_types": { 00:20:04.485 "read": true, 00:20:04.485 "write": true, 00:20:04.485 "unmap": true, 00:20:04.485 "flush": true, 00:20:04.485 "reset": true, 00:20:04.485 "nvme_admin": false, 00:20:04.485 "nvme_io": false, 00:20:04.485 "nvme_io_md": false, 00:20:04.485 "write_zeroes": true, 00:20:04.485 "zcopy": true, 00:20:04.485 "get_zone_info": false, 00:20:04.485 "zone_management": false, 00:20:04.485 "zone_append": false, 00:20:04.485 "compare": false, 00:20:04.485 "compare_and_write": false, 00:20:04.485 "abort": true, 00:20:04.485 "seek_hole": false, 00:20:04.485 "seek_data": false, 00:20:04.485 "copy": true, 00:20:04.485 "nvme_iov_md": false 00:20:04.485 }, 00:20:04.485 "memory_domains": [ 00:20:04.485 { 00:20:04.485 "dma_device_id": "system", 00:20:04.485 "dma_device_type": 1 00:20:04.485 }, 00:20:04.485 { 00:20:04.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.485 "dma_device_type": 2 00:20:04.485 } 00:20:04.485 ], 00:20:04.485 "driver_specific": {} 00:20:04.485 } 00:20:04.485 ] 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.485 
14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.485 "name": "Existed_Raid", 00:20:04.485 "uuid": "c3edda91-5795-40c7-b6f2-55ca8ee2340a", 00:20:04.485 "strip_size_kb": 0, 00:20:04.485 "state": "online", 00:20:04.485 "raid_level": "raid1", 00:20:04.485 "superblock": true, 00:20:04.485 "num_base_bdevs": 2, 00:20:04.485 "num_base_bdevs_discovered": 2, 00:20:04.485 "num_base_bdevs_operational": 2, 00:20:04.485 
"base_bdevs_list": [ 00:20:04.485 { 00:20:04.485 "name": "BaseBdev1", 00:20:04.485 "uuid": "057954c7-1f84-4e52-b2a9-293cfd1a368f", 00:20:04.485 "is_configured": true, 00:20:04.485 "data_offset": 256, 00:20:04.485 "data_size": 7936 00:20:04.485 }, 00:20:04.485 { 00:20:04.485 "name": "BaseBdev2", 00:20:04.485 "uuid": "de2f59cd-d30c-4f75-b195-69e423287beb", 00:20:04.485 "is_configured": true, 00:20:04.485 "data_offset": 256, 00:20:04.485 "data_size": 7936 00:20:04.485 } 00:20:04.485 ] 00:20:04.485 }' 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.485 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:20:05.054 [2024-11-20 14:30:43.749184] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:05.054 "name": "Existed_Raid", 00:20:05.054 "aliases": [ 00:20:05.054 "c3edda91-5795-40c7-b6f2-55ca8ee2340a" 00:20:05.054 ], 00:20:05.054 "product_name": "Raid Volume", 00:20:05.054 "block_size": 4096, 00:20:05.054 "num_blocks": 7936, 00:20:05.054 "uuid": "c3edda91-5795-40c7-b6f2-55ca8ee2340a", 00:20:05.054 "md_size": 32, 00:20:05.054 "md_interleave": false, 00:20:05.054 "dif_type": 0, 00:20:05.054 "assigned_rate_limits": { 00:20:05.054 "rw_ios_per_sec": 0, 00:20:05.054 "rw_mbytes_per_sec": 0, 00:20:05.054 "r_mbytes_per_sec": 0, 00:20:05.054 "w_mbytes_per_sec": 0 00:20:05.054 }, 00:20:05.054 "claimed": false, 00:20:05.054 "zoned": false, 00:20:05.054 "supported_io_types": { 00:20:05.054 "read": true, 00:20:05.054 "write": true, 00:20:05.054 "unmap": false, 00:20:05.054 "flush": false, 00:20:05.054 "reset": true, 00:20:05.054 "nvme_admin": false, 00:20:05.054 "nvme_io": false, 00:20:05.054 "nvme_io_md": false, 00:20:05.054 "write_zeroes": true, 00:20:05.054 "zcopy": false, 00:20:05.054 "get_zone_info": false, 00:20:05.054 "zone_management": false, 00:20:05.054 "zone_append": false, 00:20:05.054 "compare": false, 00:20:05.054 "compare_and_write": false, 00:20:05.054 "abort": false, 00:20:05.054 "seek_hole": false, 00:20:05.054 "seek_data": false, 00:20:05.054 "copy": false, 00:20:05.054 "nvme_iov_md": false 00:20:05.054 }, 00:20:05.054 "memory_domains": [ 00:20:05.054 { 00:20:05.054 "dma_device_id": "system", 00:20:05.054 "dma_device_type": 1 00:20:05.054 }, 00:20:05.054 { 00:20:05.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.054 "dma_device_type": 2 00:20:05.054 }, 00:20:05.054 { 
00:20:05.054 "dma_device_id": "system", 00:20:05.054 "dma_device_type": 1 00:20:05.054 }, 00:20:05.054 { 00:20:05.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.054 "dma_device_type": 2 00:20:05.054 } 00:20:05.054 ], 00:20:05.054 "driver_specific": { 00:20:05.054 "raid": { 00:20:05.054 "uuid": "c3edda91-5795-40c7-b6f2-55ca8ee2340a", 00:20:05.054 "strip_size_kb": 0, 00:20:05.054 "state": "online", 00:20:05.054 "raid_level": "raid1", 00:20:05.054 "superblock": true, 00:20:05.054 "num_base_bdevs": 2, 00:20:05.054 "num_base_bdevs_discovered": 2, 00:20:05.054 "num_base_bdevs_operational": 2, 00:20:05.054 "base_bdevs_list": [ 00:20:05.054 { 00:20:05.054 "name": "BaseBdev1", 00:20:05.054 "uuid": "057954c7-1f84-4e52-b2a9-293cfd1a368f", 00:20:05.054 "is_configured": true, 00:20:05.054 "data_offset": 256, 00:20:05.054 "data_size": 7936 00:20:05.054 }, 00:20:05.054 { 00:20:05.054 "name": "BaseBdev2", 00:20:05.054 "uuid": "de2f59cd-d30c-4f75-b195-69e423287beb", 00:20:05.054 "is_configured": true, 00:20:05.054 "data_offset": 256, 00:20:05.054 "data_size": 7936 00:20:05.054 } 00:20:05.054 ] 00:20:05.054 } 00:20:05.054 } 00:20:05.054 }' 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:05.054 BaseBdev2' 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:05.054 14:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.054 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:20:05.054 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:20:05.054 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:20:05.054 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.054 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:05.054 [2024-11-20 14:30:44.020920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:05.314 "name": "Existed_Raid",
00:20:05.314 "uuid": "c3edda91-5795-40c7-b6f2-55ca8ee2340a",
00:20:05.314 "strip_size_kb": 0,
00:20:05.314 "state": "online",
00:20:05.314 "raid_level": "raid1",
00:20:05.314 "superblock": true,
00:20:05.314 "num_base_bdevs": 2,
00:20:05.314 "num_base_bdevs_discovered": 1,
00:20:05.314 "num_base_bdevs_operational": 1,
00:20:05.314 "base_bdevs_list": [
00:20:05.314 {
00:20:05.314 "name": null,
00:20:05.314 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:05.314 "is_configured": false,
00:20:05.314 "data_offset": 0,
00:20:05.314 "data_size": 7936
00:20:05.314 },
00:20:05.314 {
00:20:05.314 "name": "BaseBdev2",
00:20:05.314 "uuid": "de2f59cd-d30c-4f75-b195-69e423287beb",
00:20:05.314 "is_configured": true,
00:20:05.314 "data_offset": 256,
00:20:05.314 "data_size": 7936
00:20:05.314 }
00:20:05.314 ]
00:20:05.314 }'
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:05.314 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:05.881 [2024-11-20 14:30:44.663570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:20:05.881 [2024-11-20 14:30:44.663709] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:05.881 [2024-11-20 14:30:44.759135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:05.881 [2024-11-20 14:30:44.759220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:05.881 [2024-11-20 14:30:44.759242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87675
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87675 ']'
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87675
00:20:05.881 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname
00:20:05.882 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:05.882 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87675
00:20:05.882 killing process with pid 87675
14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:05.882 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:05.882 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87675'
00:20:05.882 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87675
00:20:05.882 [2024-11-20 14:30:44.846858] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:20:05.882 14:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87675
00:20:05.882 [2024-11-20 14:30:44.861790] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:20:07.257 14:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0
00:20:07.257
00:20:07.257 real 0m5.564s
00:20:07.257 user 0m8.379s
00:20:07.257 sys 0m0.785s
00:20:07.257 14:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:07.257 14:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:07.257 ************************************
00:20:07.257 END TEST raid_state_function_test_sb_md_separate ************************************
00:20:07.257 14:30:45 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2
00:20:07.257 14:30:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:20:07.257 14:30:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:07.257 14:30:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:20:07.257 ************************************
00:20:07.257 START TEST raid_superblock_test_md_separate ************************************
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87923
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87923
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:20:07.257 14:30:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87923 ']'
00:20:07.258 14:30:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:07.258 14:30:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:07.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:07.258 14:30:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:07.258 14:30:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:07.258 14:30:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:07.258 [2024-11-20 14:30:46.120078] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:20:07.258 [2024-11-20 14:30:46.120313] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87923 ]
00:20:07.516 [2024-11-20 14:30:46.298405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:07.516 [2024-11-20 14:30:46.432146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:07.774 [2024-11-20 14:30:46.637116] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:07.774 [2024-11-20 14:30:46.637229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:08.342 malloc1
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:08.342 [2024-11-20 14:30:47.167741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:20:08.342 [2024-11-20 14:30:47.167812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:08.342 [2024-11-20 14:30:47.167847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:20:08.342 [2024-11-20 14:30:47.167864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:08.342 [2024-11-20 14:30:47.170332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:08.342 [2024-11-20 14:30:47.170382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:20:08.342 pt1
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:08.342 malloc2
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:08.342 [2024-11-20 14:30:47.225357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:20:08.342 [2024-11-20 14:30:47.225431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:08.342 [2024-11-20 14:30:47.225464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:20:08.342 [2024-11-20 14:30:47.225480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:08.342 [2024-11-20 14:30:47.228028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:08.342 [2024-11-20 14:30:47.228078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:20:08.342 pt2
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:20:08.342 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:08.343 [2024-11-20 14:30:47.237342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:20:08.343 [2024-11-20 14:30:47.239747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:20:08.343 [2024-11-20 14:30:47.240011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:20:08.343 [2024-11-20 14:30:47.240046] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:20:08.343 [2024-11-20 14:30:47.240148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:20:08.343 [2024-11-20 14:30:47.240315] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:20:08.343 [2024-11-20 14:30:47.240345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:20:08.343 [2024-11-20 14:30:47.240476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:08.343 "name": "raid_bdev1",
00:20:08.343 "uuid": "95a43d0b-03d0-473f-a638-db05f5e75260",
00:20:08.343 "strip_size_kb": 0,
00:20:08.343 "state": "online",
00:20:08.343 "raid_level": "raid1",
00:20:08.343 "superblock": true,
00:20:08.343 "num_base_bdevs": 2,
00:20:08.343 "num_base_bdevs_discovered": 2,
00:20:08.343 "num_base_bdevs_operational": 2,
00:20:08.343 "base_bdevs_list": [
00:20:08.343 {
00:20:08.343 "name": "pt1",
00:20:08.343 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:08.343 "is_configured": true,
00:20:08.343 "data_offset": 256,
00:20:08.343 "data_size": 7936
00:20:08.343 },
00:20:08.343 {
00:20:08.343 "name": "pt2",
00:20:08.343 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:08.343 "is_configured": true,
00:20:08.343 "data_offset": 256,
00:20:08.343 "data_size": 7936
00:20:08.343 }
00:20:08.343 ]
00:20:08.343 }'
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:08.343 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:08.910 [2024-11-20 14:30:47.745819] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:20:08.910 "name": "raid_bdev1",
00:20:08.910 "aliases": [
00:20:08.910 "95a43d0b-03d0-473f-a638-db05f5e75260"
00:20:08.910 ],
00:20:08.910 "product_name": "Raid Volume",
00:20:08.910 "block_size": 4096,
00:20:08.910 "num_blocks": 7936,
00:20:08.910 "uuid": "95a43d0b-03d0-473f-a638-db05f5e75260",
00:20:08.910 "md_size": 32,
00:20:08.910 "md_interleave": false,
00:20:08.910 "dif_type": 0,
00:20:08.910 "assigned_rate_limits": {
00:20:08.910 "rw_ios_per_sec": 0,
00:20:08.910 "rw_mbytes_per_sec": 0,
00:20:08.910 "r_mbytes_per_sec": 0,
00:20:08.910 "w_mbytes_per_sec": 0
00:20:08.910 },
00:20:08.910 "claimed": false,
00:20:08.910 "zoned": false,
00:20:08.910 "supported_io_types": {
00:20:08.910 "read": true,
00:20:08.910 "write": true,
00:20:08.910 "unmap": false,
00:20:08.910 "flush": false,
00:20:08.910 "reset": true,
00:20:08.910 "nvme_admin": false,
00:20:08.910 "nvme_io": false,
00:20:08.910 "nvme_io_md": false,
00:20:08.910 "write_zeroes": true,
00:20:08.910 "zcopy": false,
00:20:08.910 "get_zone_info": false,
00:20:08.910 "zone_management": false,
00:20:08.910 "zone_append": false,
00:20:08.910 "compare": false,
00:20:08.910 "compare_and_write": false,
00:20:08.910 "abort": false,
00:20:08.910 "seek_hole": false,
00:20:08.910 "seek_data": false,
00:20:08.910 "copy": false,
00:20:08.910 "nvme_iov_md": false
00:20:08.910 },
00:20:08.910 "memory_domains": [
00:20:08.910 {
00:20:08.910 "dma_device_id": "system",
00:20:08.910 "dma_device_type": 1
00:20:08.910 },
00:20:08.910 {
00:20:08.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:08.910 "dma_device_type": 2
00:20:08.910 },
00:20:08.910 {
00:20:08.910 "dma_device_id": "system",
00:20:08.910 "dma_device_type": 1
00:20:08.910 },
00:20:08.910 {
00:20:08.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:08.910 "dma_device_type": 2
00:20:08.910 }
00:20:08.910 ],
00:20:08.910 "driver_specific": {
00:20:08.910 "raid": {
00:20:08.910 "uuid": "95a43d0b-03d0-473f-a638-db05f5e75260",
00:20:08.910 "strip_size_kb": 0,
00:20:08.910 "state": "online",
00:20:08.910 "raid_level": "raid1",
00:20:08.910 "superblock": true,
00:20:08.910 "num_base_bdevs": 2,
00:20:08.910 "num_base_bdevs_discovered": 2,
00:20:08.910 "num_base_bdevs_operational": 2,
00:20:08.910 "base_bdevs_list": [
00:20:08.910 {
00:20:08.910 "name": "pt1",
00:20:08.910 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:08.910 "is_configured": true,
00:20:08.910 "data_offset": 256,
00:20:08.910 "data_size": 7936
00:20:08.910 },
00:20:08.910 {
00:20:08.910 "name": "pt2",
00:20:08.910 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:08.910 "is_configured": true,
00:20:08.910 "data_offset": 256,
00:20:08.910 "data_size": 7936
00:20:08.910 }
00:20:08.910 ]
00:20:08.910 }
00:20:08.910 }
00:20:08.910 }'
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:20:08.910 pt2'
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:08.910 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:09.170 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.170 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:20:09.170 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:20:09.170 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:09.170 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:20:09.170 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:09.170 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.170 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:09.170 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.170 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:20:09.170 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:20:09.170 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:20:09.170 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.170 14:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:09.170 14:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:20:09.170 [2024-11-20 14:30:47.997848] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=95a43d0b-03d0-473f-a638-db05f5e75260
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 95a43d0b-03d0-473f-a638-db05f5e75260 ']'
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:09.170 [2024-11-20 14:30:48.049498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:09.170 [2024-11-20 14:30:48.049531] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:09.170 [2024-11-20 14:30:48.049636] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:09.170 [2024-11-20 14:30:48.049712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:09.170 [2024-11-20 14:30:48.049741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:20:09.170 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0
00:20:09.430 14:30:48
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.430 [2024-11-20 14:30:48.185551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:09.430 [2024-11-20 14:30:48.188063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:09.430 [2024-11-20 14:30:48.188175] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:09.430 [2024-11-20 14:30:48.188257] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:09.430 [2024-11-20 14:30:48.188285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:09.430 [2024-11-20 14:30:48.188301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:09.430 request: 00:20:09.430 { 00:20:09.430 "name": 
"raid_bdev1", 00:20:09.430 "raid_level": "raid1", 00:20:09.430 "base_bdevs": [ 00:20:09.430 "malloc1", 00:20:09.430 "malloc2" 00:20:09.430 ], 00:20:09.430 "superblock": false, 00:20:09.430 "method": "bdev_raid_create", 00:20:09.430 "req_id": 1 00:20:09.430 } 00:20:09.430 Got JSON-RPC error response 00:20:09.430 response: 00:20:09.430 { 00:20:09.430 "code": -17, 00:20:09.430 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:09.430 } 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.430 [2024-11-20 14:30:48.249547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:09.430 [2024-11-20 14:30:48.249614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.430 [2024-11-20 14:30:48.249641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:09.430 [2024-11-20 14:30:48.249658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.430 [2024-11-20 14:30:48.252262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.430 [2024-11-20 14:30:48.252312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:09.430 [2024-11-20 14:30:48.252377] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:09.430 [2024-11-20 14:30:48.252448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:09.430 pt1 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.430 "name": "raid_bdev1", 00:20:09.430 "uuid": "95a43d0b-03d0-473f-a638-db05f5e75260", 00:20:09.430 "strip_size_kb": 0, 00:20:09.430 "state": "configuring", 00:20:09.430 "raid_level": "raid1", 00:20:09.430 "superblock": true, 00:20:09.430 "num_base_bdevs": 2, 00:20:09.430 "num_base_bdevs_discovered": 1, 00:20:09.430 "num_base_bdevs_operational": 2, 00:20:09.430 "base_bdevs_list": [ 00:20:09.430 { 00:20:09.430 "name": "pt1", 00:20:09.430 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:09.430 "is_configured": true, 00:20:09.430 "data_offset": 256, 00:20:09.430 "data_size": 7936 00:20:09.430 }, 00:20:09.430 { 00:20:09.430 "name": null, 00:20:09.430 
"uuid": "00000000-0000-0000-0000-000000000002", 00:20:09.430 "is_configured": false, 00:20:09.430 "data_offset": 256, 00:20:09.430 "data_size": 7936 00:20:09.430 } 00:20:09.430 ] 00:20:09.430 }' 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.430 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.997 [2024-11-20 14:30:48.725699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:09.997 [2024-11-20 14:30:48.725806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.997 [2024-11-20 14:30:48.725846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:09.997 [2024-11-20 14:30:48.725864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.997 [2024-11-20 14:30:48.726151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.997 [2024-11-20 14:30:48.726190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:09.997 [2024-11-20 14:30:48.726260] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:20:09.997 [2024-11-20 14:30:48.726296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:09.997 [2024-11-20 14:30:48.726445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:09.997 [2024-11-20 14:30:48.726474] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:09.997 [2024-11-20 14:30:48.726565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:09.997 [2024-11-20 14:30:48.726724] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:09.997 [2024-11-20 14:30:48.726748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:09.997 [2024-11-20 14:30:48.726881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.997 pt2 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.997 "name": "raid_bdev1", 00:20:09.997 "uuid": "95a43d0b-03d0-473f-a638-db05f5e75260", 00:20:09.997 "strip_size_kb": 0, 00:20:09.997 "state": "online", 00:20:09.997 "raid_level": "raid1", 00:20:09.997 "superblock": true, 00:20:09.997 "num_base_bdevs": 2, 00:20:09.997 "num_base_bdevs_discovered": 2, 00:20:09.997 "num_base_bdevs_operational": 2, 00:20:09.997 "base_bdevs_list": [ 00:20:09.997 { 00:20:09.997 "name": "pt1", 00:20:09.997 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:09.997 "is_configured": true, 00:20:09.997 "data_offset": 256, 00:20:09.997 "data_size": 7936 00:20:09.997 }, 00:20:09.997 { 00:20:09.997 "name": "pt2", 00:20:09.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:09.997 "is_configured": true, 00:20:09.997 "data_offset": 256, 
00:20:09.997 "data_size": 7936 00:20:09.997 } 00:20:09.997 ] 00:20:09.997 }' 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.997 14:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.255 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:10.255 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:10.255 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:10.255 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:10.255 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:10.255 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:10.256 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:10.256 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:10.256 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.256 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.256 [2024-11-20 14:30:49.226220] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:10.541 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.541 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:10.541 "name": "raid_bdev1", 00:20:10.541 "aliases": [ 00:20:10.541 "95a43d0b-03d0-473f-a638-db05f5e75260" 00:20:10.541 ], 00:20:10.541 "product_name": 
"Raid Volume", 00:20:10.541 "block_size": 4096, 00:20:10.541 "num_blocks": 7936, 00:20:10.541 "uuid": "95a43d0b-03d0-473f-a638-db05f5e75260", 00:20:10.541 "md_size": 32, 00:20:10.541 "md_interleave": false, 00:20:10.541 "dif_type": 0, 00:20:10.541 "assigned_rate_limits": { 00:20:10.541 "rw_ios_per_sec": 0, 00:20:10.541 "rw_mbytes_per_sec": 0, 00:20:10.541 "r_mbytes_per_sec": 0, 00:20:10.541 "w_mbytes_per_sec": 0 00:20:10.541 }, 00:20:10.541 "claimed": false, 00:20:10.541 "zoned": false, 00:20:10.541 "supported_io_types": { 00:20:10.541 "read": true, 00:20:10.541 "write": true, 00:20:10.541 "unmap": false, 00:20:10.541 "flush": false, 00:20:10.541 "reset": true, 00:20:10.541 "nvme_admin": false, 00:20:10.541 "nvme_io": false, 00:20:10.541 "nvme_io_md": false, 00:20:10.541 "write_zeroes": true, 00:20:10.541 "zcopy": false, 00:20:10.541 "get_zone_info": false, 00:20:10.541 "zone_management": false, 00:20:10.541 "zone_append": false, 00:20:10.541 "compare": false, 00:20:10.541 "compare_and_write": false, 00:20:10.541 "abort": false, 00:20:10.541 "seek_hole": false, 00:20:10.541 "seek_data": false, 00:20:10.541 "copy": false, 00:20:10.541 "nvme_iov_md": false 00:20:10.541 }, 00:20:10.541 "memory_domains": [ 00:20:10.541 { 00:20:10.542 "dma_device_id": "system", 00:20:10.542 "dma_device_type": 1 00:20:10.542 }, 00:20:10.542 { 00:20:10.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.542 "dma_device_type": 2 00:20:10.542 }, 00:20:10.542 { 00:20:10.542 "dma_device_id": "system", 00:20:10.542 "dma_device_type": 1 00:20:10.542 }, 00:20:10.542 { 00:20:10.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.542 "dma_device_type": 2 00:20:10.542 } 00:20:10.542 ], 00:20:10.542 "driver_specific": { 00:20:10.542 "raid": { 00:20:10.542 "uuid": "95a43d0b-03d0-473f-a638-db05f5e75260", 00:20:10.542 "strip_size_kb": 0, 00:20:10.542 "state": "online", 00:20:10.542 "raid_level": "raid1", 00:20:10.542 "superblock": true, 00:20:10.542 "num_base_bdevs": 2, 00:20:10.542 
"num_base_bdevs_discovered": 2, 00:20:10.542 "num_base_bdevs_operational": 2, 00:20:10.542 "base_bdevs_list": [ 00:20:10.542 { 00:20:10.542 "name": "pt1", 00:20:10.542 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:10.542 "is_configured": true, 00:20:10.542 "data_offset": 256, 00:20:10.542 "data_size": 7936 00:20:10.542 }, 00:20:10.542 { 00:20:10.542 "name": "pt2", 00:20:10.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.542 "is_configured": true, 00:20:10.542 "data_offset": 256, 00:20:10.542 "data_size": 7936 00:20:10.542 } 00:20:10.542 ] 00:20:10.542 } 00:20:10.542 } 00:20:10.542 }' 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:10.542 pt2' 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.542 
14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:10.542 [2024-11-20 14:30:49.478217] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 95a43d0b-03d0-473f-a638-db05f5e75260 '!=' 95a43d0b-03d0-473f-a638-db05f5e75260 ']' 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:20:10.542 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.805 [2024-11-20 14:30:49.525908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.805 14:30:49 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.805 "name": "raid_bdev1", 00:20:10.805 "uuid": "95a43d0b-03d0-473f-a638-db05f5e75260", 00:20:10.805 "strip_size_kb": 0, 00:20:10.805 "state": "online", 00:20:10.805 "raid_level": "raid1", 00:20:10.805 "superblock": true, 00:20:10.805 "num_base_bdevs": 2, 00:20:10.805 "num_base_bdevs_discovered": 1, 00:20:10.805 "num_base_bdevs_operational": 1, 00:20:10.805 "base_bdevs_list": [ 00:20:10.805 { 00:20:10.805 "name": null, 00:20:10.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.805 "is_configured": false, 00:20:10.805 "data_offset": 0, 00:20:10.805 "data_size": 7936 00:20:10.805 }, 00:20:10.805 { 00:20:10.805 "name": "pt2", 00:20:10.805 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.805 "is_configured": true, 00:20:10.805 "data_offset": 256, 00:20:10.805 "data_size": 7936 00:20:10.805 } 00:20:10.805 ] 00:20:10.805 }' 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:20:10.805 14:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.371 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:11.371 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.371 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.371 [2024-11-20 14:30:50.054029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:11.371 [2024-11-20 14:30:50.054076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:11.371 [2024-11-20 14:30:50.054171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:11.371 [2024-11-20 14:30:50.054236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:11.371 [2024-11-20 14:30:50.054255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:11.371 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.371 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:11.372 14:30:50 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.372 [2024-11-20 14:30:50.138044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:11.372 [2024-11-20 14:30:50.138119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.372 
[2024-11-20 14:30:50.138145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:11.372 [2024-11-20 14:30:50.138161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.372 [2024-11-20 14:30:50.140843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.372 [2024-11-20 14:30:50.140891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:11.372 [2024-11-20 14:30:50.140971] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:11.372 [2024-11-20 14:30:50.141055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:11.372 [2024-11-20 14:30:50.141205] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:11.372 [2024-11-20 14:30:50.141232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:11.372 [2024-11-20 14:30:50.141322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:11.372 [2024-11-20 14:30:50.141477] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:11.372 [2024-11-20 14:30:50.141500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:11.372 [2024-11-20 14:30:50.141631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.372 pt2 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.372 "name": "raid_bdev1", 00:20:11.372 "uuid": "95a43d0b-03d0-473f-a638-db05f5e75260", 00:20:11.372 "strip_size_kb": 0, 00:20:11.372 "state": "online", 00:20:11.372 "raid_level": "raid1", 00:20:11.372 "superblock": true, 00:20:11.372 "num_base_bdevs": 2, 00:20:11.372 "num_base_bdevs_discovered": 1, 00:20:11.372 "num_base_bdevs_operational": 1, 00:20:11.372 "base_bdevs_list": [ 00:20:11.372 { 00:20:11.372 
"name": null, 00:20:11.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.372 "is_configured": false, 00:20:11.372 "data_offset": 256, 00:20:11.372 "data_size": 7936 00:20:11.372 }, 00:20:11.372 { 00:20:11.372 "name": "pt2", 00:20:11.372 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:11.372 "is_configured": true, 00:20:11.372 "data_offset": 256, 00:20:11.372 "data_size": 7936 00:20:11.372 } 00:20:11.372 ] 00:20:11.372 }' 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.372 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.939 [2024-11-20 14:30:50.678182] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:11.939 [2024-11-20 14:30:50.678223] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:11.939 [2024-11-20 14:30:50.678311] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:11.939 [2024-11-20 14:30:50.678382] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:11.939 [2024-11-20 14:30:50.678398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.939 14:30:50 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.939 [2024-11-20 14:30:50.746235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:11.939 [2024-11-20 14:30:50.746307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.939 [2024-11-20 14:30:50.746337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:11.939 [2024-11-20 14:30:50.746352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.939 [2024-11-20 14:30:50.749042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.939 [2024-11-20 14:30:50.749085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:11.939 [2024-11-20 14:30:50.749162] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:20:11.939 [2024-11-20 14:30:50.749221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:11.939 [2024-11-20 14:30:50.749389] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:11.939 [2024-11-20 14:30:50.749413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:11.939 [2024-11-20 14:30:50.749439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:11.939 [2024-11-20 14:30:50.749521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:11.939 [2024-11-20 14:30:50.749622] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:11.939 [2024-11-20 14:30:50.749638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:11.939 [2024-11-20 14:30:50.749717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:11.939 [2024-11-20 14:30:50.749851] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:11.939 [2024-11-20 14:30:50.749880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:11.939 [2024-11-20 14:30:50.750026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.939 pt1 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.939 "name": "raid_bdev1", 00:20:11.939 "uuid": "95a43d0b-03d0-473f-a638-db05f5e75260", 00:20:11.939 "strip_size_kb": 0, 00:20:11.939 "state": "online", 00:20:11.939 "raid_level": "raid1", 00:20:11.939 "superblock": true, 00:20:11.939 "num_base_bdevs": 2, 00:20:11.939 "num_base_bdevs_discovered": 1, 00:20:11.939 
"num_base_bdevs_operational": 1, 00:20:11.939 "base_bdevs_list": [ 00:20:11.939 { 00:20:11.939 "name": null, 00:20:11.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.939 "is_configured": false, 00:20:11.939 "data_offset": 256, 00:20:11.939 "data_size": 7936 00:20:11.939 }, 00:20:11.939 { 00:20:11.939 "name": "pt2", 00:20:11.939 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:11.939 "is_configured": true, 00:20:11.939 "data_offset": 256, 00:20:11.939 "data_size": 7936 00:20:11.939 } 00:20:11.939 ] 00:20:11.939 }' 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.939 14:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:12.507 [2024-11-20 
14:30:51.362687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 95a43d0b-03d0-473f-a638-db05f5e75260 '!=' 95a43d0b-03d0-473f-a638-db05f5e75260 ']' 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87923 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87923 ']' 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87923 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87923 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.507 killing process with pid 87923 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87923' 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87923 00:20:12.507 [2024-11-20 14:30:51.445679] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:12.507 14:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87923 00:20:12.507 [2024-11-20 14:30:51.445804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:20:12.507 [2024-11-20 14:30:51.445885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:12.507 [2024-11-20 14:30:51.445912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:12.768 [2024-11-20 14:30:51.649003] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:13.787 14:30:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:20:13.787 00:20:13.787 real 0m6.729s 00:20:13.787 user 0m10.643s 00:20:13.787 sys 0m0.975s 00:20:13.787 14:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.787 14:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:13.787 ************************************ 00:20:13.787 END TEST raid_superblock_test_md_separate 00:20:13.787 ************************************ 00:20:13.787 14:30:52 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:20:13.787 14:30:52 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:20:13.787 14:30:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:13.787 14:30:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.787 14:30:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:13.787 ************************************ 00:20:13.787 START TEST raid_rebuild_test_sb_md_separate 00:20:13.787 ************************************ 00:20:13.787 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:20:13.787 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:13.787 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:20:13.787 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:13.787 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:13.787 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:13.787 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:13.787 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:13.787 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:13.787 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:13.787 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:13.787 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:13.787 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:13.787 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:13.787 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:13.787 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:13.787 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:14.046 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:14.046 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:14.046 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:14.046 
14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:14.046 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:14.046 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:14.046 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:14.046 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:14.046 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88257 00:20:14.046 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:14.046 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88257 00:20:14.046 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88257 ']' 00:20:14.046 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.046 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.046 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:14.046 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.046 14:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:14.046 [2024-11-20 14:30:52.879651] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:20:14.046 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:14.046 Zero copy mechanism will not be used. 00:20:14.046 [2024-11-20 14:30:52.879824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88257 ] 00:20:14.304 [2024-11-20 14:30:53.057743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.304 [2024-11-20 14:30:53.192103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.562 [2024-11-20 14:30:53.397161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:14.562 [2024-11-20 14:30:53.397284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.130 BaseBdev1_malloc 
00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.130 [2024-11-20 14:30:53.952447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:15.130 [2024-11-20 14:30:53.952524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.130 [2024-11-20 14:30:53.952560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:15.130 [2024-11-20 14:30:53.952580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.130 [2024-11-20 14:30:53.955228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.130 [2024-11-20 14:30:53.955278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:15.130 BaseBdev1 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.130 BaseBdev2_malloc 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.130 14:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.130 [2024-11-20 14:30:54.006650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:15.130 [2024-11-20 14:30:54.006749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.130 [2024-11-20 14:30:54.006783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:15.130 [2024-11-20 14:30:54.006803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.130 [2024-11-20 14:30:54.009447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.130 [2024-11-20 14:30:54.009494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:15.130 BaseBdev2 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.130 spare_malloc 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.130 spare_delay 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.130 [2024-11-20 14:30:54.089962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:15.130 [2024-11-20 14:30:54.090091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.130 [2024-11-20 14:30:54.090131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:15.130 [2024-11-20 14:30:54.090150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.130 [2024-11-20 14:30:54.092850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.130 [2024-11-20 14:30:54.092906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:15.130 spare 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:20:15.130 [2024-11-20 14:30:54.102058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:15.130 [2024-11-20 14:30:54.104626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:15.130 [2024-11-20 14:30:54.104921] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:15.130 [2024-11-20 14:30:54.104957] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:15.130 [2024-11-20 14:30:54.105121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:15.130 [2024-11-20 14:30:54.105316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:15.130 [2024-11-20 14:30:54.105342] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:15.130 [2024-11-20 14:30:54.105497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:15.130 14:30:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.130 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.389 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.389 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.389 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.389 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.389 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.389 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.389 "name": "raid_bdev1", 00:20:15.389 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:15.389 "strip_size_kb": 0, 00:20:15.389 "state": "online", 00:20:15.389 "raid_level": "raid1", 00:20:15.389 "superblock": true, 00:20:15.389 "num_base_bdevs": 2, 00:20:15.389 "num_base_bdevs_discovered": 2, 00:20:15.389 "num_base_bdevs_operational": 2, 00:20:15.389 "base_bdevs_list": [ 00:20:15.389 { 00:20:15.389 "name": "BaseBdev1", 00:20:15.389 "uuid": "21ae2d80-da43-5b6c-9e28-80f7d68d37e1", 00:20:15.389 "is_configured": true, 00:20:15.389 "data_offset": 256, 00:20:15.389 "data_size": 7936 00:20:15.389 }, 00:20:15.389 { 00:20:15.389 "name": "BaseBdev2", 00:20:15.389 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:15.389 "is_configured": true, 00:20:15.389 "data_offset": 256, 00:20:15.389 "data_size": 7936 
00:20:15.389 } 00:20:15.389 ] 00:20:15.389 }' 00:20:15.389 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.389 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.647 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:15.647 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:15.647 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.647 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.647 [2024-11-20 14:30:54.606450] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:15.647 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:15.906 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:16.165 [2024-11-20 14:30:54.934317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:16.165 /dev/nbd0 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:16.165 1+0 records in 00:20:16.165 1+0 records out 00:20:16.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344191 s, 11.9 MB/s 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:16.165 14:30:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:16.165 14:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:17.098 7936+0 records in 00:20:17.098 7936+0 records out 00:20:17.098 32505856 bytes (33 MB, 31 MiB) copied, 0.920082 s, 35.3 MB/s 00:20:17.098 14:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:17.098 14:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:17.098 14:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:17.098 14:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:17.098 14:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:17.098 14:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:17.098 14:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:17.357 [2024-11-20 14:30:56.192657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:17.357 14:30:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.357 [2024-11-20 14:30:56.204803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.357 "name": "raid_bdev1", 00:20:17.357 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:17.357 "strip_size_kb": 0, 00:20:17.357 "state": "online", 00:20:17.357 "raid_level": "raid1", 00:20:17.357 "superblock": true, 00:20:17.357 "num_base_bdevs": 2, 00:20:17.357 "num_base_bdevs_discovered": 1, 00:20:17.357 "num_base_bdevs_operational": 1, 00:20:17.357 "base_bdevs_list": [ 00:20:17.357 { 00:20:17.357 "name": null, 00:20:17.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.357 "is_configured": false, 00:20:17.357 "data_offset": 0, 00:20:17.357 "data_size": 7936 00:20:17.357 }, 00:20:17.357 { 00:20:17.357 "name": "BaseBdev2", 00:20:17.357 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:17.357 "is_configured": true, 00:20:17.357 "data_offset": 256, 00:20:17.357 "data_size": 7936 00:20:17.357 } 00:20:17.357 ] 00:20:17.357 }' 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.357 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:20:17.963 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:17.963 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.963 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.963 [2024-11-20 14:30:56.681376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:17.963 [2024-11-20 14:30:56.695239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:17.963 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.963 14:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:17.963 [2024-11-20 14:30:56.698116] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:18.897 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.897 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.897 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:18.897 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:18.897 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.897 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.897 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.897 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:20:18.897 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.897 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.897 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.897 "name": "raid_bdev1", 00:20:18.897 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:18.897 "strip_size_kb": 0, 00:20:18.897 "state": "online", 00:20:18.898 "raid_level": "raid1", 00:20:18.898 "superblock": true, 00:20:18.898 "num_base_bdevs": 2, 00:20:18.898 "num_base_bdevs_discovered": 2, 00:20:18.898 "num_base_bdevs_operational": 2, 00:20:18.898 "process": { 00:20:18.898 "type": "rebuild", 00:20:18.898 "target": "spare", 00:20:18.898 "progress": { 00:20:18.898 "blocks": 2560, 00:20:18.898 "percent": 32 00:20:18.898 } 00:20:18.898 }, 00:20:18.898 "base_bdevs_list": [ 00:20:18.898 { 00:20:18.898 "name": "spare", 00:20:18.898 "uuid": "5de94daf-d14f-51a2-a7f3-d850d56be135", 00:20:18.898 "is_configured": true, 00:20:18.898 "data_offset": 256, 00:20:18.898 "data_size": 7936 00:20:18.898 }, 00:20:18.898 { 00:20:18.898 "name": "BaseBdev2", 00:20:18.898 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:18.898 "is_configured": true, 00:20:18.898 "data_offset": 256, 00:20:18.898 "data_size": 7936 00:20:18.898 } 00:20:18.898 ] 00:20:18.898 }' 00:20:18.898 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.898 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.898 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.898 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.898 14:30:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:18.898 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.898 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:18.898 [2024-11-20 14:30:57.863541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:19.156 [2024-11-20 14:30:57.907588] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:19.156 [2024-11-20 14:30:57.907713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.156 [2024-11-20 14:30:57.907739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:19.156 [2024-11-20 14:30:57.907761] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:19.156 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.156 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:19.156 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:19.156 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.156 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:19.156 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:19.156 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:19.156 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.156 14:30:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.156 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.156 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.157 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.157 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.157 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.157 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:19.157 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.157 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.157 "name": "raid_bdev1", 00:20:19.157 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:19.157 "strip_size_kb": 0, 00:20:19.157 "state": "online", 00:20:19.157 "raid_level": "raid1", 00:20:19.157 "superblock": true, 00:20:19.157 "num_base_bdevs": 2, 00:20:19.157 "num_base_bdevs_discovered": 1, 00:20:19.157 "num_base_bdevs_operational": 1, 00:20:19.157 "base_bdevs_list": [ 00:20:19.157 { 00:20:19.157 "name": null, 00:20:19.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.157 "is_configured": false, 00:20:19.157 "data_offset": 0, 00:20:19.157 "data_size": 7936 00:20:19.157 }, 00:20:19.157 { 00:20:19.157 "name": "BaseBdev2", 00:20:19.157 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:19.157 "is_configured": true, 00:20:19.157 "data_offset": 256, 00:20:19.157 "data_size": 7936 00:20:19.157 } 00:20:19.157 ] 00:20:19.157 }' 00:20:19.157 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.157 14:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.724 "name": "raid_bdev1", 00:20:19.724 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:19.724 "strip_size_kb": 0, 00:20:19.724 "state": "online", 00:20:19.724 "raid_level": "raid1", 00:20:19.724 "superblock": true, 00:20:19.724 "num_base_bdevs": 2, 00:20:19.724 "num_base_bdevs_discovered": 1, 00:20:19.724 "num_base_bdevs_operational": 1, 00:20:19.724 "base_bdevs_list": [ 00:20:19.724 { 00:20:19.724 "name": null, 00:20:19.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.724 
"is_configured": false, 00:20:19.724 "data_offset": 0, 00:20:19.724 "data_size": 7936 00:20:19.724 }, 00:20:19.724 { 00:20:19.724 "name": "BaseBdev2", 00:20:19.724 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:19.724 "is_configured": true, 00:20:19.724 "data_offset": 256, 00:20:19.724 "data_size": 7936 00:20:19.724 } 00:20:19.724 ] 00:20:19.724 }' 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:19.724 [2024-11-20 14:30:58.582809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:19.724 [2024-11-20 14:30:58.595967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.724 14:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:19.724 [2024-11-20 14:30:58.598574] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:20.658 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.659 14:30:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.659 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.659 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.659 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.659 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.659 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.659 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.659 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:20.659 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.917 "name": "raid_bdev1", 00:20:20.917 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:20.917 "strip_size_kb": 0, 00:20:20.917 "state": "online", 00:20:20.917 "raid_level": "raid1", 00:20:20.917 "superblock": true, 00:20:20.917 "num_base_bdevs": 2, 00:20:20.917 "num_base_bdevs_discovered": 2, 00:20:20.917 "num_base_bdevs_operational": 2, 00:20:20.917 "process": { 00:20:20.917 "type": "rebuild", 00:20:20.917 "target": "spare", 00:20:20.917 "progress": { 00:20:20.917 "blocks": 2560, 00:20:20.917 "percent": 32 00:20:20.917 } 00:20:20.917 }, 00:20:20.917 "base_bdevs_list": [ 00:20:20.917 { 00:20:20.917 "name": "spare", 00:20:20.917 "uuid": "5de94daf-d14f-51a2-a7f3-d850d56be135", 00:20:20.917 "is_configured": true, 00:20:20.917 "data_offset": 256, 00:20:20.917 "data_size": 7936 00:20:20.917 }, 
00:20:20.917 { 00:20:20.917 "name": "BaseBdev2", 00:20:20.917 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:20.917 "is_configured": true, 00:20:20.917 "data_offset": 256, 00:20:20.917 "data_size": 7936 00:20:20.917 } 00:20:20.917 ] 00:20:20.917 }' 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:20.917 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=766 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.917 14:30:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.917 "name": "raid_bdev1", 00:20:20.917 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:20.917 "strip_size_kb": 0, 00:20:20.917 "state": "online", 00:20:20.917 "raid_level": "raid1", 00:20:20.917 "superblock": true, 00:20:20.917 "num_base_bdevs": 2, 00:20:20.917 "num_base_bdevs_discovered": 2, 00:20:20.917 "num_base_bdevs_operational": 2, 00:20:20.917 "process": { 00:20:20.917 "type": "rebuild", 00:20:20.917 "target": "spare", 00:20:20.917 "progress": { 00:20:20.917 "blocks": 2816, 00:20:20.917 "percent": 35 00:20:20.917 } 00:20:20.917 }, 00:20:20.917 "base_bdevs_list": [ 00:20:20.917 { 00:20:20.917 "name": "spare", 00:20:20.917 "uuid": "5de94daf-d14f-51a2-a7f3-d850d56be135", 00:20:20.917 "is_configured": true, 00:20:20.917 "data_offset": 256, 00:20:20.917 "data_size": 7936 00:20:20.917 }, 00:20:20.917 { 00:20:20.917 "name": "BaseBdev2", 00:20:20.917 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:20.917 
"is_configured": true, 00:20:20.917 "data_offset": 256, 00:20:20.917 "data_size": 7936 00:20:20.917 } 00:20:20.917 ] 00:20:20.917 }' 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.917 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.175 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.175 14:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:22.111 14:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:22.111 14:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:22.111 14:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.111 14:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:22.111 14:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:22.111 14:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.111 14:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.111 14:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.111 14:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.111 14:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:22.111 14:31:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.111 14:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:22.111 "name": "raid_bdev1", 00:20:22.111 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:22.111 "strip_size_kb": 0, 00:20:22.111 "state": "online", 00:20:22.111 "raid_level": "raid1", 00:20:22.111 "superblock": true, 00:20:22.111 "num_base_bdevs": 2, 00:20:22.111 "num_base_bdevs_discovered": 2, 00:20:22.111 "num_base_bdevs_operational": 2, 00:20:22.111 "process": { 00:20:22.111 "type": "rebuild", 00:20:22.111 "target": "spare", 00:20:22.111 "progress": { 00:20:22.111 "blocks": 5888, 00:20:22.111 "percent": 74 00:20:22.111 } 00:20:22.111 }, 00:20:22.111 "base_bdevs_list": [ 00:20:22.111 { 00:20:22.111 "name": "spare", 00:20:22.111 "uuid": "5de94daf-d14f-51a2-a7f3-d850d56be135", 00:20:22.111 "is_configured": true, 00:20:22.111 "data_offset": 256, 00:20:22.111 "data_size": 7936 00:20:22.111 }, 00:20:22.111 { 00:20:22.111 "name": "BaseBdev2", 00:20:22.111 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:22.111 "is_configured": true, 00:20:22.111 "data_offset": 256, 00:20:22.111 "data_size": 7936 00:20:22.111 } 00:20:22.111 ] 00:20:22.111 }' 00:20:22.111 14:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.111 14:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:22.111 14:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.370 14:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:22.370 14:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:22.995 [2024-11-20 14:31:01.723763] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:20:22.995 [2024-11-20 14:31:01.723882] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:22.995 [2024-11-20 14:31:01.724081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.254 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:23.254 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:23.254 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.254 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:23.254 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:23.254 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.254 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.254 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.254 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:23.254 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.254 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.254 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.254 "name": "raid_bdev1", 00:20:23.254 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:23.254 "strip_size_kb": 0, 00:20:23.254 "state": "online", 00:20:23.254 "raid_level": "raid1", 00:20:23.254 "superblock": true, 00:20:23.254 
"num_base_bdevs": 2, 00:20:23.254 "num_base_bdevs_discovered": 2, 00:20:23.254 "num_base_bdevs_operational": 2, 00:20:23.254 "base_bdevs_list": [ 00:20:23.254 { 00:20:23.254 "name": "spare", 00:20:23.254 "uuid": "5de94daf-d14f-51a2-a7f3-d850d56be135", 00:20:23.254 "is_configured": true, 00:20:23.254 "data_offset": 256, 00:20:23.254 "data_size": 7936 00:20:23.254 }, 00:20:23.254 { 00:20:23.254 "name": "BaseBdev2", 00:20:23.254 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:23.254 "is_configured": true, 00:20:23.254 "data_offset": 256, 00:20:23.254 "data_size": 7936 00:20:23.254 } 00:20:23.254 ] 00:20:23.254 }' 00:20:23.254 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.513 14:31:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.513 "name": "raid_bdev1", 00:20:23.513 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:23.513 "strip_size_kb": 0, 00:20:23.513 "state": "online", 00:20:23.513 "raid_level": "raid1", 00:20:23.513 "superblock": true, 00:20:23.513 "num_base_bdevs": 2, 00:20:23.513 "num_base_bdevs_discovered": 2, 00:20:23.513 "num_base_bdevs_operational": 2, 00:20:23.513 "base_bdevs_list": [ 00:20:23.513 { 00:20:23.513 "name": "spare", 00:20:23.513 "uuid": "5de94daf-d14f-51a2-a7f3-d850d56be135", 00:20:23.513 "is_configured": true, 00:20:23.513 "data_offset": 256, 00:20:23.513 "data_size": 7936 00:20:23.513 }, 00:20:23.513 { 00:20:23.513 "name": "BaseBdev2", 00:20:23.513 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:23.513 "is_configured": true, 00:20:23.513 "data_offset": 256, 00:20:23.513 "data_size": 7936 00:20:23.513 } 00:20:23.513 ] 00:20:23.513 }' 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:23.513 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.772 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.772 "name": "raid_bdev1", 00:20:23.772 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:23.772 
"strip_size_kb": 0, 00:20:23.772 "state": "online", 00:20:23.772 "raid_level": "raid1", 00:20:23.772 "superblock": true, 00:20:23.772 "num_base_bdevs": 2, 00:20:23.772 "num_base_bdevs_discovered": 2, 00:20:23.772 "num_base_bdevs_operational": 2, 00:20:23.772 "base_bdevs_list": [ 00:20:23.772 { 00:20:23.772 "name": "spare", 00:20:23.772 "uuid": "5de94daf-d14f-51a2-a7f3-d850d56be135", 00:20:23.772 "is_configured": true, 00:20:23.772 "data_offset": 256, 00:20:23.772 "data_size": 7936 00:20:23.772 }, 00:20:23.772 { 00:20:23.772 "name": "BaseBdev2", 00:20:23.772 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:23.772 "is_configured": true, 00:20:23.772 "data_offset": 256, 00:20:23.772 "data_size": 7936 00:20:23.772 } 00:20:23.772 ] 00:20:23.772 }' 00:20:23.772 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.772 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.030 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:24.030 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.030 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.030 [2024-11-20 14:31:02.974850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:24.030 [2024-11-20 14:31:02.975089] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:24.030 [2024-11-20 14:31:02.975249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:24.030 [2024-11-20 14:31:02.975353] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:24.030 [2024-11-20 14:31:02.975373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:20:24.030 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.030 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:20:24.030 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.030 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.030 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.030 14:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.289 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:24.289 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:24.289 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:24.289 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:24.289 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:24.289 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:24.289 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:24.289 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:24.289 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:24.289 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:20:24.289 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:24.289 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:24.289 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:24.548 /dev/nbd0 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:24.548 1+0 records in 00:20:24.548 1+0 records out 00:20:24.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396409 s, 10.3 MB/s 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:24.548 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:24.807 /dev/nbd1 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:24.807 1+0 records in 00:20:24.807 1+0 records out 00:20:24.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437657 s, 9.4 MB/s 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:24.807 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:25.066 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:25.066 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:25.066 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:25.066 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:20:25.066 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:25.066 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:25.066 14:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:25.325 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:25.325 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:25.325 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:25.325 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:25.325 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:25.325 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:25.325 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:25.325 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:25.325 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:25.325 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:25.584 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:25.584 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:25.584 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:20:25.584 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:25.584 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:25.584 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:25.584 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:25.584 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:25.584 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:25.584 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:25.584 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.584 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:25.584 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.584 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:25.584 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.584 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:25.584 [2024-11-20 14:31:04.561751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:25.584 [2024-11-20 14:31:04.561833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.584 [2024-11-20 14:31:04.561869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:25.584 [2024-11-20 14:31:04.561885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:25.584 [2024-11-20 14:31:04.564738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.584 [2024-11-20 14:31:04.564793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:25.584 [2024-11-20 14:31:04.564897] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:25.584 [2024-11-20 14:31:04.564975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:25.843 [2024-11-20 14:31:04.565203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:25.843 spare 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:25.843 [2024-11-20 14:31:04.665348] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:25.843 [2024-11-20 14:31:04.665419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:25.843 [2024-11-20 14:31:04.665602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:20:25.843 [2024-11-20 14:31:04.665829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:25.843 [2024-11-20 14:31:04.665847] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:25.843 [2024-11-20 14:31:04.666085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.843 "name": "raid_bdev1", 00:20:25.843 "uuid": 
"4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:25.843 "strip_size_kb": 0, 00:20:25.843 "state": "online", 00:20:25.843 "raid_level": "raid1", 00:20:25.843 "superblock": true, 00:20:25.843 "num_base_bdevs": 2, 00:20:25.843 "num_base_bdevs_discovered": 2, 00:20:25.843 "num_base_bdevs_operational": 2, 00:20:25.843 "base_bdevs_list": [ 00:20:25.843 { 00:20:25.843 "name": "spare", 00:20:25.843 "uuid": "5de94daf-d14f-51a2-a7f3-d850d56be135", 00:20:25.843 "is_configured": true, 00:20:25.843 "data_offset": 256, 00:20:25.843 "data_size": 7936 00:20:25.843 }, 00:20:25.843 { 00:20:25.843 "name": "BaseBdev2", 00:20:25.843 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:25.843 "is_configured": true, 00:20:25.843 "data_offset": 256, 00:20:25.843 "data_size": 7936 00:20:25.843 } 00:20:25.843 ] 00:20:25.843 }' 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.843 14:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.409 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:26.409 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:26.409 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:26.409 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:26.409 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:26.409 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.409 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.409 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:26.409 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.409 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.409 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:26.409 "name": "raid_bdev1", 00:20:26.409 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:26.409 "strip_size_kb": 0, 00:20:26.409 "state": "online", 00:20:26.409 "raid_level": "raid1", 00:20:26.409 "superblock": true, 00:20:26.409 "num_base_bdevs": 2, 00:20:26.409 "num_base_bdevs_discovered": 2, 00:20:26.410 "num_base_bdevs_operational": 2, 00:20:26.410 "base_bdevs_list": [ 00:20:26.410 { 00:20:26.410 "name": "spare", 00:20:26.410 "uuid": "5de94daf-d14f-51a2-a7f3-d850d56be135", 00:20:26.410 "is_configured": true, 00:20:26.410 "data_offset": 256, 00:20:26.410 "data_size": 7936 00:20:26.410 }, 00:20:26.410 { 00:20:26.410 "name": "BaseBdev2", 00:20:26.410 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:26.410 "is_configured": true, 00:20:26.410 "data_offset": 256, 00:20:26.410 "data_size": 7936 00:20:26.410 } 00:20:26.410 ] 00:20:26.410 }' 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.410 14:31:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.410 [2024-11-20 14:31:05.338320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.410 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.669 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.669 "name": "raid_bdev1", 00:20:26.669 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:26.669 "strip_size_kb": 0, 00:20:26.669 "state": "online", 00:20:26.669 "raid_level": "raid1", 00:20:26.669 "superblock": true, 00:20:26.669 "num_base_bdevs": 2, 00:20:26.669 "num_base_bdevs_discovered": 1, 00:20:26.669 "num_base_bdevs_operational": 1, 00:20:26.669 "base_bdevs_list": [ 00:20:26.669 { 00:20:26.669 "name": null, 00:20:26.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.669 "is_configured": false, 00:20:26.669 "data_offset": 0, 00:20:26.669 "data_size": 7936 00:20:26.669 }, 00:20:26.669 { 00:20:26.669 "name": "BaseBdev2", 00:20:26.669 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:26.669 "is_configured": true, 00:20:26.669 "data_offset": 256, 00:20:26.669 "data_size": 7936 00:20:26.669 } 00:20:26.669 ] 00:20:26.669 }' 00:20:26.669 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.669 14:31:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.928 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:26.928 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.928 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.928 [2024-11-20 14:31:05.854509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:26.928 [2024-11-20 14:31:05.854769] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:26.928 [2024-11-20 14:31:05.854800] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:26.928 [2024-11-20 14:31:05.854852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:26.928 [2024-11-20 14:31:05.867683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:20:26.928 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.928 14:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:26.928 [2024-11-20 14:31:05.870349] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:28.304 14:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:28.304 14:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:28.304 14:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:28.304 14:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:20:28.304 14:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:28.304 14:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.304 14:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.304 14:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:28.304 14:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.304 14:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.304 14:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:28.304 "name": "raid_bdev1", 00:20:28.304 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:28.304 "strip_size_kb": 0, 00:20:28.304 "state": "online", 00:20:28.304 "raid_level": "raid1", 00:20:28.304 "superblock": true, 00:20:28.304 "num_base_bdevs": 2, 00:20:28.304 "num_base_bdevs_discovered": 2, 00:20:28.304 "num_base_bdevs_operational": 2, 00:20:28.304 "process": { 00:20:28.304 "type": "rebuild", 00:20:28.304 "target": "spare", 00:20:28.304 "progress": { 00:20:28.304 "blocks": 2560, 00:20:28.304 "percent": 32 00:20:28.304 } 00:20:28.304 }, 00:20:28.304 "base_bdevs_list": [ 00:20:28.304 { 00:20:28.304 "name": "spare", 00:20:28.304 "uuid": "5de94daf-d14f-51a2-a7f3-d850d56be135", 00:20:28.304 "is_configured": true, 00:20:28.304 "data_offset": 256, 00:20:28.304 "data_size": 7936 00:20:28.304 }, 00:20:28.304 { 00:20:28.304 "name": "BaseBdev2", 00:20:28.304 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:28.304 "is_configured": true, 00:20:28.304 "data_offset": 256, 00:20:28.304 "data_size": 7936 00:20:28.304 } 00:20:28.304 ] 00:20:28.304 }' 00:20:28.304 14:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:28.304 14:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:28.304 14:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.304 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.304 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:28.304 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.304 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:28.304 [2024-11-20 14:31:07.036507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.304 [2024-11-20 14:31:07.079930] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:28.304 [2024-11-20 14:31:07.080074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.304 [2024-11-20 14:31:07.080101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.304 [2024-11-20 14:31:07.080144] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:28.304 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.304 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:28.304 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.304 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.304 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.304 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.304 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:28.304 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.304 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.304 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.304 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.304 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.304 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.305 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.305 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:28.305 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.305 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.305 "name": "raid_bdev1", 00:20:28.305 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:28.305 "strip_size_kb": 0, 00:20:28.305 "state": "online", 00:20:28.305 "raid_level": "raid1", 00:20:28.305 "superblock": true, 00:20:28.305 "num_base_bdevs": 2, 00:20:28.305 "num_base_bdevs_discovered": 1, 00:20:28.305 "num_base_bdevs_operational": 1, 00:20:28.305 "base_bdevs_list": [ 00:20:28.305 { 00:20:28.305 "name": null, 00:20:28.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.305 
"is_configured": false, 00:20:28.305 "data_offset": 0, 00:20:28.305 "data_size": 7936 00:20:28.305 }, 00:20:28.305 { 00:20:28.305 "name": "BaseBdev2", 00:20:28.305 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:28.305 "is_configured": true, 00:20:28.305 "data_offset": 256, 00:20:28.305 "data_size": 7936 00:20:28.305 } 00:20:28.305 ] 00:20:28.305 }' 00:20:28.305 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.305 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:28.871 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:28.871 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.871 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:28.871 [2024-11-20 14:31:07.630573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:28.871 [2024-11-20 14:31:07.630661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.871 [2024-11-20 14:31:07.630698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:28.871 [2024-11-20 14:31:07.630718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.871 [2024-11-20 14:31:07.631083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.871 [2024-11-20 14:31:07.631117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:28.871 [2024-11-20 14:31:07.631201] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:28.871 [2024-11-20 14:31:07.631226] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:20:28.871 [2024-11-20 14:31:07.631240] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:28.871 [2024-11-20 14:31:07.631283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:28.871 [2024-11-20 14:31:07.645151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:20:28.871 spare 00:20:28.871 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.871 14:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:28.871 [2024-11-20 14:31:07.647864] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:29.805 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.805 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:29.805 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:29.805 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:29.805 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.805 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.805 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.805 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.805 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.805 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:29.805 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.805 "name": "raid_bdev1", 00:20:29.805 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:29.805 "strip_size_kb": 0, 00:20:29.805 "state": "online", 00:20:29.805 "raid_level": "raid1", 00:20:29.805 "superblock": true, 00:20:29.805 "num_base_bdevs": 2, 00:20:29.805 "num_base_bdevs_discovered": 2, 00:20:29.805 "num_base_bdevs_operational": 2, 00:20:29.805 "process": { 00:20:29.805 "type": "rebuild", 00:20:29.805 "target": "spare", 00:20:29.805 "progress": { 00:20:29.805 "blocks": 2560, 00:20:29.805 "percent": 32 00:20:29.805 } 00:20:29.805 }, 00:20:29.805 "base_bdevs_list": [ 00:20:29.805 { 00:20:29.805 "name": "spare", 00:20:29.805 "uuid": "5de94daf-d14f-51a2-a7f3-d850d56be135", 00:20:29.805 "is_configured": true, 00:20:29.805 "data_offset": 256, 00:20:29.805 "data_size": 7936 00:20:29.805 }, 00:20:29.805 { 00:20:29.805 "name": "BaseBdev2", 00:20:29.805 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:29.805 "is_configured": true, 00:20:29.805 "data_offset": 256, 00:20:29.805 "data_size": 7936 00:20:29.805 } 00:20:29.805 ] 00:20:29.805 }' 00:20:29.805 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.805 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:29.805 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.064 14:31:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.064 [2024-11-20 14:31:08.845350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:30.064 [2024-11-20 14:31:08.857296] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:30.064 [2024-11-20 14:31:08.857646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.064 [2024-11-20 14:31:08.857789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:30.064 [2024-11-20 14:31:08.857842] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.064 14:31:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.064 "name": "raid_bdev1", 00:20:30.064 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:30.064 "strip_size_kb": 0, 00:20:30.064 "state": "online", 00:20:30.064 "raid_level": "raid1", 00:20:30.064 "superblock": true, 00:20:30.064 "num_base_bdevs": 2, 00:20:30.064 "num_base_bdevs_discovered": 1, 00:20:30.064 "num_base_bdevs_operational": 1, 00:20:30.064 "base_bdevs_list": [ 00:20:30.064 { 00:20:30.064 "name": null, 00:20:30.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.064 "is_configured": false, 00:20:30.064 "data_offset": 0, 00:20:30.064 "data_size": 7936 00:20:30.064 }, 00:20:30.064 { 00:20:30.064 "name": "BaseBdev2", 00:20:30.064 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:30.064 "is_configured": true, 00:20:30.064 "data_offset": 256, 00:20:30.064 "data_size": 7936 00:20:30.064 } 00:20:30.064 ] 00:20:30.064 }' 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.064 14:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.649 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:20:30.649 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:30.649 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:30.649 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:30.649 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:30.649 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.649 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.649 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.649 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.649 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.650 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:30.650 "name": "raid_bdev1", 00:20:30.650 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:30.650 "strip_size_kb": 0, 00:20:30.650 "state": "online", 00:20:30.650 "raid_level": "raid1", 00:20:30.650 "superblock": true, 00:20:30.650 "num_base_bdevs": 2, 00:20:30.650 "num_base_bdevs_discovered": 1, 00:20:30.650 "num_base_bdevs_operational": 1, 00:20:30.650 "base_bdevs_list": [ 00:20:30.650 { 00:20:30.650 "name": null, 00:20:30.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.650 "is_configured": false, 00:20:30.650 "data_offset": 0, 00:20:30.650 "data_size": 7936 00:20:30.650 }, 00:20:30.650 { 00:20:30.650 "name": "BaseBdev2", 00:20:30.650 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:30.650 "is_configured": true, 
00:20:30.650 "data_offset": 256, 00:20:30.650 "data_size": 7936 00:20:30.650 } 00:20:30.650 ] 00:20:30.650 }' 00:20:30.650 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:30.650 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:30.650 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.650 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:30.650 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:30.650 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.650 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.650 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.650 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:30.650 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.650 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.650 [2024-11-20 14:31:09.572546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:30.650 [2024-11-20 14:31:09.572624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.650 [2024-11-20 14:31:09.572661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:30.650 [2024-11-20 14:31:09.572677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.650 [2024-11-20 14:31:09.572965] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.650 [2024-11-20 14:31:09.573015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:30.650 [2024-11-20 14:31:09.573094] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:30.650 [2024-11-20 14:31:09.573116] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:30.650 [2024-11-20 14:31:09.573135] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:30.650 [2024-11-20 14:31:09.573149] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:30.650 BaseBdev1 00:20:30.650 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.650 14:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:32.025 14:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:32.025 14:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.025 14:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:32.025 14:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:32.025 14:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:32.025 14:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:32.025 14:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.025 14:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.025 14:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.025 14:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.025 14:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.025 14:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.025 14:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.025 14:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.025 14:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.025 14:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.025 "name": "raid_bdev1", 00:20:32.025 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:32.025 "strip_size_kb": 0, 00:20:32.025 "state": "online", 00:20:32.025 "raid_level": "raid1", 00:20:32.025 "superblock": true, 00:20:32.025 "num_base_bdevs": 2, 00:20:32.025 "num_base_bdevs_discovered": 1, 00:20:32.025 "num_base_bdevs_operational": 1, 00:20:32.025 "base_bdevs_list": [ 00:20:32.025 { 00:20:32.025 "name": null, 00:20:32.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.025 "is_configured": false, 00:20:32.025 "data_offset": 0, 00:20:32.025 "data_size": 7936 00:20:32.025 }, 00:20:32.025 { 00:20:32.025 "name": "BaseBdev2", 00:20:32.025 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:32.025 "is_configured": true, 00:20:32.025 "data_offset": 256, 00:20:32.025 "data_size": 7936 00:20:32.025 } 00:20:32.025 ] 00:20:32.025 }' 00:20:32.025 14:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.025 14:31:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.284 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.284 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.284 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.285 "name": "raid_bdev1", 00:20:32.285 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:32.285 "strip_size_kb": 0, 00:20:32.285 "state": "online", 00:20:32.285 "raid_level": "raid1", 00:20:32.285 "superblock": true, 00:20:32.285 "num_base_bdevs": 2, 00:20:32.285 "num_base_bdevs_discovered": 1, 00:20:32.285 "num_base_bdevs_operational": 1, 00:20:32.285 "base_bdevs_list": [ 00:20:32.285 { 00:20:32.285 "name": null, 00:20:32.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.285 "is_configured": false, 00:20:32.285 "data_offset": 0, 00:20:32.285 
"data_size": 7936 00:20:32.285 }, 00:20:32.285 { 00:20:32.285 "name": "BaseBdev2", 00:20:32.285 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:32.285 "is_configured": true, 00:20:32.285 "data_offset": 256, 00:20:32.285 "data_size": 7936 00:20:32.285 } 00:20:32.285 ] 00:20:32.285 }' 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.285 [2024-11-20 14:31:11.217031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:32.285 [2024-11-20 14:31:11.217233] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:32.285 [2024-11-20 14:31:11.217259] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:32.285 request: 00:20:32.285 { 00:20:32.285 "base_bdev": "BaseBdev1", 00:20:32.285 "raid_bdev": "raid_bdev1", 00:20:32.285 "method": "bdev_raid_add_base_bdev", 00:20:32.285 "req_id": 1 00:20:32.285 } 00:20:32.285 Got JSON-RPC error response 00:20:32.285 response: 00:20:32.285 { 00:20:32.285 "code": -22, 00:20:32.285 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:32.285 } 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:32.285 14:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.662 "name": "raid_bdev1", 00:20:33.662 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:33.662 "strip_size_kb": 0, 00:20:33.662 "state": "online", 00:20:33.662 "raid_level": "raid1", 00:20:33.662 "superblock": true, 00:20:33.662 "num_base_bdevs": 2, 00:20:33.662 "num_base_bdevs_discovered": 1, 00:20:33.662 "num_base_bdevs_operational": 1, 00:20:33.662 "base_bdevs_list": [ 
00:20:33.662 { 00:20:33.662 "name": null, 00:20:33.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.662 "is_configured": false, 00:20:33.662 "data_offset": 0, 00:20:33.662 "data_size": 7936 00:20:33.662 }, 00:20:33.662 { 00:20:33.662 "name": "BaseBdev2", 00:20:33.662 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:33.662 "is_configured": true, 00:20:33.662 "data_offset": 256, 00:20:33.662 "data_size": 7936 00:20:33.662 } 00:20:33.662 ] 00:20:33.662 }' 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.662 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:33.921 "name": "raid_bdev1", 00:20:33.921 "uuid": "4572b4e4-768c-46b2-908e-8022d97e8212", 00:20:33.921 "strip_size_kb": 0, 00:20:33.921 "state": "online", 00:20:33.921 "raid_level": "raid1", 00:20:33.921 "superblock": true, 00:20:33.921 "num_base_bdevs": 2, 00:20:33.921 "num_base_bdevs_discovered": 1, 00:20:33.921 "num_base_bdevs_operational": 1, 00:20:33.921 "base_bdevs_list": [ 00:20:33.921 { 00:20:33.921 "name": null, 00:20:33.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.921 "is_configured": false, 00:20:33.921 "data_offset": 0, 00:20:33.921 "data_size": 7936 00:20:33.921 }, 00:20:33.921 { 00:20:33.921 "name": "BaseBdev2", 00:20:33.921 "uuid": "cd60013f-a531-5340-bdff-c8f5c2a0cf8c", 00:20:33.921 "is_configured": true, 00:20:33.921 "data_offset": 256, 00:20:33.921 "data_size": 7936 00:20:33.921 } 00:20:33.921 ] 00:20:33.921 }' 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88257 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88257 ']' 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88257 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.921 
14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88257 00:20:33.921 killing process with pid 88257 00:20:33.921 Received shutdown signal, test time was about 60.000000 seconds 00:20:33.921 00:20:33.921 Latency(us) 00:20:33.921 [2024-11-20T14:31:12.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.921 [2024-11-20T14:31:12.903Z] =================================================================================================================== 00:20:33.921 [2024-11-20T14:31:12.903Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88257' 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88257 00:20:33.921 [2024-11-20 14:31:12.882814] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:33.921 14:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88257 00:20:33.922 [2024-11-20 14:31:12.882967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:33.922 [2024-11-20 14:31:12.883050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:33.922 [2024-11-20 14:31:12.883072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:34.488 [2024-11-20 14:31:13.174237] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:35.426 14:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:20:35.426 00:20:35.426 real 0m21.459s 00:20:35.426 user 0m29.083s 00:20:35.426 sys 0m2.505s 00:20:35.426 14:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:35.426 14:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.426 ************************************ 00:20:35.426 END TEST raid_rebuild_test_sb_md_separate 00:20:35.426 ************************************ 00:20:35.426 14:31:14 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:20:35.426 14:31:14 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:20:35.426 14:31:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:35.426 14:31:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:35.426 14:31:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:35.426 ************************************ 00:20:35.426 START TEST raid_state_function_test_sb_md_interleaved 00:20:35.426 ************************************ 00:20:35.426 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:20:35.426 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:35.426 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:35.426 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:35.426 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:35.426 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:35.426 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:35.426 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:35.427 14:31:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:35.427 Process raid pid: 88958 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88958 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88958' 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88958 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88958 ']' 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.427 14:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.427 [2024-11-20 14:31:14.372644] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:20:35.427 [2024-11-20 14:31:14.372803] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.685 [2024-11-20 14:31:14.549340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.945 [2024-11-20 14:31:14.681435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.945 [2024-11-20 14:31:14.890783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:35.945 [2024-11-20 14:31:14.890841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.518 [2024-11-20 14:31:15.391260] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:36.518 [2024-11-20 14:31:15.391329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:36.518 [2024-11-20 14:31:15.391347] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:36.518 [2024-11-20 14:31:15.391364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:36.518 14:31:15 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.518 14:31:15 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.518 "name": "Existed_Raid", 00:20:36.518 "uuid": "5893ddd9-f663-406d-b413-305fe54a6b9e", 00:20:36.518 "strip_size_kb": 0, 00:20:36.518 "state": "configuring", 00:20:36.518 "raid_level": "raid1", 00:20:36.518 "superblock": true, 00:20:36.518 "num_base_bdevs": 2, 00:20:36.518 "num_base_bdevs_discovered": 0, 00:20:36.518 "num_base_bdevs_operational": 2, 00:20:36.518 "base_bdevs_list": [ 00:20:36.518 { 00:20:36.518 "name": "BaseBdev1", 00:20:36.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.518 "is_configured": false, 00:20:36.518 "data_offset": 0, 00:20:36.518 "data_size": 0 00:20:36.518 }, 00:20:36.518 { 00:20:36.518 "name": "BaseBdev2", 00:20:36.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.518 "is_configured": false, 00:20:36.518 "data_offset": 0, 00:20:36.518 "data_size": 0 00:20:36.518 } 00:20:36.518 ] 00:20:36.518 }' 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.518 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.101 [2024-11-20 14:31:15.927331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:37.101 [2024-11-20 14:31:15.927528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.101 [2024-11-20 14:31:15.935321] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:37.101 [2024-11-20 14:31:15.935377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:37.101 [2024-11-20 14:31:15.935403] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:37.101 [2024-11-20 14:31:15.935423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.101 [2024-11-20 14:31:15.980320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:37.101 BaseBdev1 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.101 14:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.101 [ 00:20:37.101 { 00:20:37.101 "name": "BaseBdev1", 00:20:37.101 "aliases": [ 00:20:37.101 "a76a44ec-877a-44be-9bfb-5ce85c09d3b7" 00:20:37.101 ], 00:20:37.101 "product_name": "Malloc disk", 00:20:37.101 "block_size": 4128, 00:20:37.101 "num_blocks": 8192, 00:20:37.101 "uuid": "a76a44ec-877a-44be-9bfb-5ce85c09d3b7", 00:20:37.101 "md_size": 32, 00:20:37.101 
"md_interleave": true, 00:20:37.101 "dif_type": 0, 00:20:37.101 "assigned_rate_limits": { 00:20:37.101 "rw_ios_per_sec": 0, 00:20:37.101 "rw_mbytes_per_sec": 0, 00:20:37.101 "r_mbytes_per_sec": 0, 00:20:37.101 "w_mbytes_per_sec": 0 00:20:37.101 }, 00:20:37.101 "claimed": true, 00:20:37.101 "claim_type": "exclusive_write", 00:20:37.101 "zoned": false, 00:20:37.101 "supported_io_types": { 00:20:37.102 "read": true, 00:20:37.102 "write": true, 00:20:37.102 "unmap": true, 00:20:37.102 "flush": true, 00:20:37.102 "reset": true, 00:20:37.102 "nvme_admin": false, 00:20:37.102 "nvme_io": false, 00:20:37.102 "nvme_io_md": false, 00:20:37.102 "write_zeroes": true, 00:20:37.102 "zcopy": true, 00:20:37.102 "get_zone_info": false, 00:20:37.102 "zone_management": false, 00:20:37.102 "zone_append": false, 00:20:37.102 "compare": false, 00:20:37.102 "compare_and_write": false, 00:20:37.102 "abort": true, 00:20:37.102 "seek_hole": false, 00:20:37.102 "seek_data": false, 00:20:37.102 "copy": true, 00:20:37.102 "nvme_iov_md": false 00:20:37.102 }, 00:20:37.102 "memory_domains": [ 00:20:37.102 { 00:20:37.102 "dma_device_id": "system", 00:20:37.102 "dma_device_type": 1 00:20:37.102 }, 00:20:37.102 { 00:20:37.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.102 "dma_device_type": 2 00:20:37.102 } 00:20:37.102 ], 00:20:37.102 "driver_specific": {} 00:20:37.102 } 00:20:37.102 ] 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:37.102 14:31:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.102 "name": "Existed_Raid", 00:20:37.102 "uuid": "411d838c-ebe8-45a4-b50e-bb12a7864d07", 00:20:37.102 "strip_size_kb": 0, 00:20:37.102 "state": "configuring", 00:20:37.102 "raid_level": "raid1", 
00:20:37.102 "superblock": true, 00:20:37.102 "num_base_bdevs": 2, 00:20:37.102 "num_base_bdevs_discovered": 1, 00:20:37.102 "num_base_bdevs_operational": 2, 00:20:37.102 "base_bdevs_list": [ 00:20:37.102 { 00:20:37.102 "name": "BaseBdev1", 00:20:37.102 "uuid": "a76a44ec-877a-44be-9bfb-5ce85c09d3b7", 00:20:37.102 "is_configured": true, 00:20:37.102 "data_offset": 256, 00:20:37.102 "data_size": 7936 00:20:37.102 }, 00:20:37.102 { 00:20:37.102 "name": "BaseBdev2", 00:20:37.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.102 "is_configured": false, 00:20:37.102 "data_offset": 0, 00:20:37.102 "data_size": 0 00:20:37.102 } 00:20:37.102 ] 00:20:37.102 }' 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.102 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.695 [2024-11-20 14:31:16.520562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:37.695 [2024-11-20 14:31:16.520625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.695 [2024-11-20 14:31:16.528605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:37.695 [2024-11-20 14:31:16.531152] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:37.695 [2024-11-20 14:31:16.531239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.695 
14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.695 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.695 "name": "Existed_Raid", 00:20:37.695 "uuid": "bf4ea61e-2c90-4b42-b86b-316c9f5a7134", 00:20:37.695 "strip_size_kb": 0, 00:20:37.695 "state": "configuring", 00:20:37.695 "raid_level": "raid1", 00:20:37.695 "superblock": true, 00:20:37.695 "num_base_bdevs": 2, 00:20:37.695 "num_base_bdevs_discovered": 1, 00:20:37.695 "num_base_bdevs_operational": 2, 00:20:37.695 "base_bdevs_list": [ 00:20:37.695 { 00:20:37.695 "name": "BaseBdev1", 00:20:37.695 "uuid": "a76a44ec-877a-44be-9bfb-5ce85c09d3b7", 00:20:37.695 "is_configured": true, 00:20:37.695 "data_offset": 256, 00:20:37.695 "data_size": 7936 00:20:37.695 }, 00:20:37.695 { 00:20:37.696 "name": "BaseBdev2", 00:20:37.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.696 "is_configured": false, 00:20:37.696 "data_offset": 0, 00:20:37.696 "data_size": 0 00:20:37.696 } 00:20:37.696 ] 00:20:37.696 }' 00:20:37.696 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:20:37.696 14:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.283 [2024-11-20 14:31:17.098928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:38.283 [2024-11-20 14:31:17.099473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:38.283 [2024-11-20 14:31:17.099500] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:38.283 [2024-11-20 14:31:17.099608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:38.283 [2024-11-20 14:31:17.099710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:38.283 [2024-11-20 14:31:17.099730] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:38.283 [2024-11-20 14:31:17.099814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.283 BaseBdev2 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.283 [ 00:20:38.283 { 00:20:38.283 "name": "BaseBdev2", 00:20:38.283 "aliases": [ 00:20:38.283 "346da6de-bdd0-42d5-9b32-bb7e1bb1f904" 00:20:38.283 ], 00:20:38.283 "product_name": "Malloc disk", 00:20:38.283 "block_size": 4128, 00:20:38.283 "num_blocks": 8192, 00:20:38.283 "uuid": "346da6de-bdd0-42d5-9b32-bb7e1bb1f904", 00:20:38.283 "md_size": 32, 00:20:38.283 "md_interleave": true, 00:20:38.283 "dif_type": 0, 00:20:38.283 "assigned_rate_limits": { 00:20:38.283 "rw_ios_per_sec": 0, 00:20:38.283 "rw_mbytes_per_sec": 0, 00:20:38.283 "r_mbytes_per_sec": 0, 00:20:38.283 "w_mbytes_per_sec": 0 00:20:38.283 }, 00:20:38.283 "claimed": true, 00:20:38.283 "claim_type": "exclusive_write", 
00:20:38.283 "zoned": false, 00:20:38.283 "supported_io_types": { 00:20:38.283 "read": true, 00:20:38.283 "write": true, 00:20:38.283 "unmap": true, 00:20:38.283 "flush": true, 00:20:38.283 "reset": true, 00:20:38.283 "nvme_admin": false, 00:20:38.283 "nvme_io": false, 00:20:38.283 "nvme_io_md": false, 00:20:38.283 "write_zeroes": true, 00:20:38.283 "zcopy": true, 00:20:38.283 "get_zone_info": false, 00:20:38.283 "zone_management": false, 00:20:38.283 "zone_append": false, 00:20:38.283 "compare": false, 00:20:38.283 "compare_and_write": false, 00:20:38.283 "abort": true, 00:20:38.283 "seek_hole": false, 00:20:38.283 "seek_data": false, 00:20:38.283 "copy": true, 00:20:38.283 "nvme_iov_md": false 00:20:38.283 }, 00:20:38.283 "memory_domains": [ 00:20:38.283 { 00:20:38.283 "dma_device_id": "system", 00:20:38.283 "dma_device_type": 1 00:20:38.283 }, 00:20:38.283 { 00:20:38.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.283 "dma_device_type": 2 00:20:38.283 } 00:20:38.283 ], 00:20:38.283 "driver_specific": {} 00:20:38.283 } 00:20:38.283 ] 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:38.283 
14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.283 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.283 "name": "Existed_Raid", 00:20:38.283 "uuid": "bf4ea61e-2c90-4b42-b86b-316c9f5a7134", 00:20:38.283 "strip_size_kb": 0, 00:20:38.283 "state": "online", 00:20:38.283 "raid_level": "raid1", 00:20:38.283 "superblock": true, 00:20:38.283 "num_base_bdevs": 2, 00:20:38.283 "num_base_bdevs_discovered": 2, 00:20:38.283 
"num_base_bdevs_operational": 2, 00:20:38.283 "base_bdevs_list": [ 00:20:38.283 { 00:20:38.283 "name": "BaseBdev1", 00:20:38.284 "uuid": "a76a44ec-877a-44be-9bfb-5ce85c09d3b7", 00:20:38.284 "is_configured": true, 00:20:38.284 "data_offset": 256, 00:20:38.284 "data_size": 7936 00:20:38.284 }, 00:20:38.284 { 00:20:38.284 "name": "BaseBdev2", 00:20:38.284 "uuid": "346da6de-bdd0-42d5-9b32-bb7e1bb1f904", 00:20:38.284 "is_configured": true, 00:20:38.284 "data_offset": 256, 00:20:38.284 "data_size": 7936 00:20:38.284 } 00:20:38.284 ] 00:20:38.284 }' 00:20:38.284 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.284 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.852 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:38.852 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:38.852 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:38.852 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:38.852 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:38.852 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:38.852 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:38.852 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:38.852 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.852 14:31:17 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.852 [2024-11-20 14:31:17.663527] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.852 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.852 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:38.852 "name": "Existed_Raid", 00:20:38.852 "aliases": [ 00:20:38.852 "bf4ea61e-2c90-4b42-b86b-316c9f5a7134" 00:20:38.852 ], 00:20:38.852 "product_name": "Raid Volume", 00:20:38.852 "block_size": 4128, 00:20:38.852 "num_blocks": 7936, 00:20:38.852 "uuid": "bf4ea61e-2c90-4b42-b86b-316c9f5a7134", 00:20:38.852 "md_size": 32, 00:20:38.852 "md_interleave": true, 00:20:38.852 "dif_type": 0, 00:20:38.852 "assigned_rate_limits": { 00:20:38.852 "rw_ios_per_sec": 0, 00:20:38.852 "rw_mbytes_per_sec": 0, 00:20:38.852 "r_mbytes_per_sec": 0, 00:20:38.852 "w_mbytes_per_sec": 0 00:20:38.852 }, 00:20:38.852 "claimed": false, 00:20:38.852 "zoned": false, 00:20:38.852 "supported_io_types": { 00:20:38.852 "read": true, 00:20:38.852 "write": true, 00:20:38.852 "unmap": false, 00:20:38.852 "flush": false, 00:20:38.852 "reset": true, 00:20:38.852 "nvme_admin": false, 00:20:38.852 "nvme_io": false, 00:20:38.852 "nvme_io_md": false, 00:20:38.852 "write_zeroes": true, 00:20:38.852 "zcopy": false, 00:20:38.852 "get_zone_info": false, 00:20:38.852 "zone_management": false, 00:20:38.852 "zone_append": false, 00:20:38.852 "compare": false, 00:20:38.852 "compare_and_write": false, 00:20:38.852 "abort": false, 00:20:38.852 "seek_hole": false, 00:20:38.852 "seek_data": false, 00:20:38.852 "copy": false, 00:20:38.852 "nvme_iov_md": false 00:20:38.852 }, 00:20:38.852 "memory_domains": [ 00:20:38.852 { 00:20:38.852 "dma_device_id": "system", 00:20:38.852 "dma_device_type": 1 00:20:38.852 }, 00:20:38.852 { 00:20:38.852 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:38.853 "dma_device_type": 2 00:20:38.853 }, 00:20:38.853 { 00:20:38.853 "dma_device_id": "system", 00:20:38.853 "dma_device_type": 1 00:20:38.853 }, 00:20:38.853 { 00:20:38.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.853 "dma_device_type": 2 00:20:38.853 } 00:20:38.853 ], 00:20:38.853 "driver_specific": { 00:20:38.853 "raid": { 00:20:38.853 "uuid": "bf4ea61e-2c90-4b42-b86b-316c9f5a7134", 00:20:38.853 "strip_size_kb": 0, 00:20:38.853 "state": "online", 00:20:38.853 "raid_level": "raid1", 00:20:38.853 "superblock": true, 00:20:38.853 "num_base_bdevs": 2, 00:20:38.853 "num_base_bdevs_discovered": 2, 00:20:38.853 "num_base_bdevs_operational": 2, 00:20:38.853 "base_bdevs_list": [ 00:20:38.853 { 00:20:38.853 "name": "BaseBdev1", 00:20:38.853 "uuid": "a76a44ec-877a-44be-9bfb-5ce85c09d3b7", 00:20:38.853 "is_configured": true, 00:20:38.853 "data_offset": 256, 00:20:38.853 "data_size": 7936 00:20:38.853 }, 00:20:38.853 { 00:20:38.853 "name": "BaseBdev2", 00:20:38.853 "uuid": "346da6de-bdd0-42d5-9b32-bb7e1bb1f904", 00:20:38.853 "is_configured": true, 00:20:38.853 "data_offset": 256, 00:20:38.853 "data_size": 7936 00:20:38.853 } 00:20:38.853 ] 00:20:38.853 } 00:20:38.853 } 00:20:38.853 }' 00:20:38.853 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:38.853 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:38.853 BaseBdev2' 00:20:38.853 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.853 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:38.853 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:20:38.853 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:38.853 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.853 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.853 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.853 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:39.113 
14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:39.113 [2024-11-20 14:31:17.907276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:39.113 14:31:17 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.113 14:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.113 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.113 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.113 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.113 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:39.113 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.113 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.113 "name": "Existed_Raid", 00:20:39.113 "uuid": "bf4ea61e-2c90-4b42-b86b-316c9f5a7134", 00:20:39.113 "strip_size_kb": 0, 00:20:39.113 "state": "online", 00:20:39.113 "raid_level": "raid1", 00:20:39.113 "superblock": true, 00:20:39.113 "num_base_bdevs": 2, 00:20:39.113 "num_base_bdevs_discovered": 1, 00:20:39.113 "num_base_bdevs_operational": 1, 00:20:39.113 "base_bdevs_list": [ 00:20:39.113 { 00:20:39.113 "name": null, 00:20:39.113 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:39.113 "is_configured": false, 00:20:39.113 "data_offset": 0, 00:20:39.113 "data_size": 7936 00:20:39.113 }, 00:20:39.113 { 00:20:39.113 "name": "BaseBdev2", 00:20:39.113 "uuid": "346da6de-bdd0-42d5-9b32-bb7e1bb1f904", 00:20:39.113 "is_configured": true, 00:20:39.113 "data_offset": 256, 00:20:39.113 "data_size": 7936 00:20:39.113 } 00:20:39.113 ] 00:20:39.113 }' 00:20:39.113 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.113 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:39.682 14:31:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:39.682 [2024-11-20 14:31:18.514723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:39.682 [2024-11-20 14:31:18.514867] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:39.682 [2024-11-20 14:31:18.602662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:39.682 [2024-11-20 14:31:18.602739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:39.682 [2024-11-20 14:31:18.602761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88958 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88958 ']' 00:20:39.682 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88958 00:20:39.683 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:39.941 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.941 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88958 00:20:39.941 killing process with pid 88958 00:20:39.941 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:39.941 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:39.941 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88958' 00:20:39.941 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88958 00:20:39.941 [2024-11-20 14:31:18.691872] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:39.941 14:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88958 00:20:39.941 [2024-11-20 14:31:18.706721] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:40.878 
14:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:20:40.878 00:20:40.878 real 0m5.481s 00:20:40.878 user 0m8.263s 00:20:40.878 sys 0m0.801s 00:20:40.878 14:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.878 ************************************ 00:20:40.878 END TEST raid_state_function_test_sb_md_interleaved 00:20:40.878 ************************************ 00:20:40.878 14:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:40.878 14:31:19 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:20:40.878 14:31:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:40.878 14:31:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.878 14:31:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:40.878 ************************************ 00:20:40.878 START TEST raid_superblock_test_md_interleaved 00:20:40.878 ************************************ 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89206 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89206 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:40.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89206 ']' 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.878 14:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:41.136 [2024-11-20 14:31:19.918325] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:20:41.136 [2024-11-20 14:31:19.918777] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89206 ] 00:20:41.136 [2024-11-20 14:31:20.101118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.393 [2024-11-20 14:31:20.229851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.652 [2024-11-20 14:31:20.434014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:41.652 [2024-11-20 14:31:20.434124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:41.911 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.911 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:41.911 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:41.911 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:41.911 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:41.911 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:41.911 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:41.911 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:41.911 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:41.911 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:41.911 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:20:41.911 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.911 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:42.171 malloc1 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:42.171 [2024-11-20 14:31:20.933452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc1 00:20:42.171 [2024-11-20 14:31:20.933665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.171 [2024-11-20 14:31:20.933744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:42.171 [2024-11-20 14:31:20.933776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.171 [2024-11-20 14:31:20.936341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.171 [2024-11-20 14:31:20.936387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:42.171 pt1 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:42.171 malloc2 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:42.171 [2024-11-20 14:31:20.985471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:42.171 [2024-11-20 14:31:20.985548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.171 [2024-11-20 14:31:20.985582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:42.171 [2024-11-20 14:31:20.985597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.171 [2024-11-20 14:31:20.988124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.171 [2024-11-20 14:31:20.988323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:42.171 pt2 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:42.171 
14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:42.171 [2024-11-20 14:31:20.993498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:42.171 [2024-11-20 14:31:20.995952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:42.171 [2024-11-20 14:31:20.996445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:42.171 [2024-11-20 14:31:20.996473] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:42.171 [2024-11-20 14:31:20.996578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:42.171 [2024-11-20 14:31:20.996683] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:42.171 [2024-11-20 14:31:20.996704] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:42.171 [2024-11-20 14:31:20.996799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.171 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.172 14:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.172 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.172 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.172 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.172 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:42.172 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.172 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.172 "name": "raid_bdev1", 00:20:42.172 "uuid": "870abaa7-e871-4fdf-9322-21b4eed9adaf", 00:20:42.172 "strip_size_kb": 0, 00:20:42.172 "state": "online", 00:20:42.172 "raid_level": "raid1", 00:20:42.172 "superblock": true, 00:20:42.172 "num_base_bdevs": 2, 00:20:42.172 "num_base_bdevs_discovered": 2, 00:20:42.172 "num_base_bdevs_operational": 2, 00:20:42.172 "base_bdevs_list": [ 00:20:42.172 { 00:20:42.172 "name": "pt1", 00:20:42.172 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:42.172 "is_configured": true, 00:20:42.172 "data_offset": 256, 00:20:42.172 "data_size": 7936 00:20:42.172 }, 00:20:42.172 { 00:20:42.172 "name": 
"pt2", 00:20:42.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:42.172 "is_configured": true, 00:20:42.172 "data_offset": 256, 00:20:42.172 "data_size": 7936 00:20:42.172 } 00:20:42.172 ] 00:20:42.172 }' 00:20:42.172 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.172 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:42.745 [2024-11-20 14:31:21.538181] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:20:42.745 "name": "raid_bdev1", 00:20:42.745 "aliases": [ 00:20:42.745 "870abaa7-e871-4fdf-9322-21b4eed9adaf" 00:20:42.745 ], 00:20:42.745 "product_name": "Raid Volume", 00:20:42.745 "block_size": 4128, 00:20:42.745 "num_blocks": 7936, 00:20:42.745 "uuid": "870abaa7-e871-4fdf-9322-21b4eed9adaf", 00:20:42.745 "md_size": 32, 00:20:42.745 "md_interleave": true, 00:20:42.745 "dif_type": 0, 00:20:42.745 "assigned_rate_limits": { 00:20:42.745 "rw_ios_per_sec": 0, 00:20:42.745 "rw_mbytes_per_sec": 0, 00:20:42.745 "r_mbytes_per_sec": 0, 00:20:42.745 "w_mbytes_per_sec": 0 00:20:42.745 }, 00:20:42.745 "claimed": false, 00:20:42.745 "zoned": false, 00:20:42.745 "supported_io_types": { 00:20:42.745 "read": true, 00:20:42.745 "write": true, 00:20:42.745 "unmap": false, 00:20:42.745 "flush": false, 00:20:42.745 "reset": true, 00:20:42.745 "nvme_admin": false, 00:20:42.745 "nvme_io": false, 00:20:42.745 "nvme_io_md": false, 00:20:42.745 "write_zeroes": true, 00:20:42.745 "zcopy": false, 00:20:42.745 "get_zone_info": false, 00:20:42.745 "zone_management": false, 00:20:42.745 "zone_append": false, 00:20:42.745 "compare": false, 00:20:42.745 "compare_and_write": false, 00:20:42.745 "abort": false, 00:20:42.745 "seek_hole": false, 00:20:42.745 "seek_data": false, 00:20:42.745 "copy": false, 00:20:42.745 "nvme_iov_md": false 00:20:42.745 }, 00:20:42.745 "memory_domains": [ 00:20:42.745 { 00:20:42.745 "dma_device_id": "system", 00:20:42.745 "dma_device_type": 1 00:20:42.745 }, 00:20:42.745 { 00:20:42.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.745 "dma_device_type": 2 00:20:42.745 }, 00:20:42.745 { 00:20:42.745 "dma_device_id": "system", 00:20:42.745 "dma_device_type": 1 00:20:42.745 }, 00:20:42.745 { 00:20:42.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.745 "dma_device_type": 2 00:20:42.745 } 00:20:42.745 ], 00:20:42.745 "driver_specific": { 00:20:42.745 "raid": { 00:20:42.745 "uuid": "870abaa7-e871-4fdf-9322-21b4eed9adaf", 00:20:42.745 
"strip_size_kb": 0, 00:20:42.745 "state": "online", 00:20:42.745 "raid_level": "raid1", 00:20:42.745 "superblock": true, 00:20:42.745 "num_base_bdevs": 2, 00:20:42.745 "num_base_bdevs_discovered": 2, 00:20:42.745 "num_base_bdevs_operational": 2, 00:20:42.745 "base_bdevs_list": [ 00:20:42.745 { 00:20:42.745 "name": "pt1", 00:20:42.745 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:42.745 "is_configured": true, 00:20:42.745 "data_offset": 256, 00:20:42.745 "data_size": 7936 00:20:42.745 }, 00:20:42.745 { 00:20:42.745 "name": "pt2", 00:20:42.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:42.745 "is_configured": true, 00:20:42.745 "data_offset": 256, 00:20:42.745 "data_size": 7936 00:20:42.745 } 00:20:42.745 ] 00:20:42.745 } 00:20:42.745 } 00:20:42.745 }' 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:42.745 pt2' 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.745 14:31:21 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:42.745 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:43.034 [2024-11-20 14:31:21.798058] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=870abaa7-e871-4fdf-9322-21b4eed9adaf 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 870abaa7-e871-4fdf-9322-21b4eed9adaf ']' 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.034 [2024-11-20 14:31:21.841677] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:43.034 [2024-11-20 14:31:21.841710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:43.034 [2024-11-20 14:31:21.841826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.034 [2024-11-20 14:31:21.841906] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:43.034 [2024-11-20 14:31:21.841926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.034 14:31:21 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.034 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.035 [2024-11-20 14:31:21.973757] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:43.035 [2024-11-20 14:31:21.976392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:43.035 [2024-11-20 14:31:21.976495] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:43.035 [2024-11-20 14:31:21.976578] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:43.035 [2024-11-20 14:31:21.976604] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:43.035 [2024-11-20 14:31:21.976619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:43.035 request: 00:20:43.035 { 00:20:43.035 "name": "raid_bdev1", 00:20:43.035 "raid_level": "raid1", 00:20:43.035 "base_bdevs": [ 00:20:43.035 "malloc1", 00:20:43.035 "malloc2" 00:20:43.035 ], 00:20:43.035 "superblock": false, 00:20:43.035 "method": "bdev_raid_create", 00:20:43.035 "req_id": 1 00:20:43.035 } 00:20:43.035 Got JSON-RPC error response 00:20:43.035 response: 00:20:43.035 { 00:20:43.035 "code": -17, 00:20:43.035 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:43.035 } 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.035 14:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.294 [2024-11-20 14:31:22.037746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:43.294 [2024-11-20 14:31:22.037966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.294 [2024-11-20 14:31:22.038069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:43.294 [2024-11-20 14:31:22.038180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.294 [2024-11-20 14:31:22.040751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.294 [2024-11-20 14:31:22.040800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:43.294 [2024-11-20 14:31:22.040880] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt1 00:20:43.294 [2024-11-20 14:31:22.040957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:43.294 pt1 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.294 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.294 "name": "raid_bdev1", 00:20:43.294 "uuid": "870abaa7-e871-4fdf-9322-21b4eed9adaf", 00:20:43.294 "strip_size_kb": 0, 00:20:43.294 "state": "configuring", 00:20:43.294 "raid_level": "raid1", 00:20:43.294 "superblock": true, 00:20:43.294 "num_base_bdevs": 2, 00:20:43.294 "num_base_bdevs_discovered": 1, 00:20:43.294 "num_base_bdevs_operational": 2, 00:20:43.294 "base_bdevs_list": [ 00:20:43.294 { 00:20:43.294 "name": "pt1", 00:20:43.294 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:43.294 "is_configured": true, 00:20:43.294 "data_offset": 256, 00:20:43.294 "data_size": 7936 00:20:43.294 }, 00:20:43.294 { 00:20:43.294 "name": null, 00:20:43.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:43.294 "is_configured": false, 00:20:43.294 "data_offset": 256, 00:20:43.294 "data_size": 7936 00:20:43.294 } 00:20:43.294 ] 00:20:43.294 }' 00:20:43.295 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.295 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.864 [2024-11-20 14:31:22.569878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:43.864 [2024-11-20 14:31:22.569982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.864 [2024-11-20 14:31:22.570028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:43.864 [2024-11-20 14:31:22.570048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.864 [2024-11-20 14:31:22.570273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.864 [2024-11-20 14:31:22.570306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:43.864 [2024-11-20 14:31:22.570376] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:43.864 [2024-11-20 14:31:22.570411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:43.864 [2024-11-20 14:31:22.570527] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:43.864 [2024-11-20 14:31:22.570547] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:43.864 [2024-11-20 14:31:22.570639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:43.864 [2024-11-20 14:31:22.570729] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:43.864 [2024-11-20 14:31:22.570744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:43.864 [2024-11-20 14:31:22.570830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.864 pt2 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set 
+x 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.864 "name": "raid_bdev1", 00:20:43.864 "uuid": "870abaa7-e871-4fdf-9322-21b4eed9adaf", 00:20:43.864 "strip_size_kb": 0, 00:20:43.864 "state": "online", 00:20:43.864 "raid_level": "raid1", 00:20:43.864 "superblock": true, 00:20:43.864 "num_base_bdevs": 2, 00:20:43.864 "num_base_bdevs_discovered": 2, 00:20:43.864 "num_base_bdevs_operational": 2, 00:20:43.864 "base_bdevs_list": [ 00:20:43.864 { 00:20:43.864 "name": "pt1", 00:20:43.864 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:43.864 "is_configured": true, 00:20:43.864 "data_offset": 256, 00:20:43.864 "data_size": 7936 00:20:43.864 }, 00:20:43.864 { 00:20:43.864 "name": "pt2", 00:20:43.864 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:43.864 "is_configured": true, 00:20:43.864 "data_offset": 256, 00:20:43.864 "data_size": 7936 00:20:43.864 } 00:20:43.864 ] 00:20:43.864 }' 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.864 14:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.124 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:44.124 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:44.124 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:44.124 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:44.124 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:44.124 14:31:23 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:44.124 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:44.124 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:44.124 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.124 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.124 [2024-11-20 14:31:23.078359] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:44.124 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.383 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:44.383 "name": "raid_bdev1", 00:20:44.383 "aliases": [ 00:20:44.383 "870abaa7-e871-4fdf-9322-21b4eed9adaf" 00:20:44.383 ], 00:20:44.383 "product_name": "Raid Volume", 00:20:44.383 "block_size": 4128, 00:20:44.383 "num_blocks": 7936, 00:20:44.383 "uuid": "870abaa7-e871-4fdf-9322-21b4eed9adaf", 00:20:44.383 "md_size": 32, 00:20:44.383 "md_interleave": true, 00:20:44.383 "dif_type": 0, 00:20:44.383 "assigned_rate_limits": { 00:20:44.384 "rw_ios_per_sec": 0, 00:20:44.384 "rw_mbytes_per_sec": 0, 00:20:44.384 "r_mbytes_per_sec": 0, 00:20:44.384 "w_mbytes_per_sec": 0 00:20:44.384 }, 00:20:44.384 "claimed": false, 00:20:44.384 "zoned": false, 00:20:44.384 "supported_io_types": { 00:20:44.384 "read": true, 00:20:44.384 "write": true, 00:20:44.384 "unmap": false, 00:20:44.384 "flush": false, 00:20:44.384 "reset": true, 00:20:44.384 "nvme_admin": false, 00:20:44.384 "nvme_io": false, 00:20:44.384 "nvme_io_md": false, 00:20:44.384 "write_zeroes": true, 00:20:44.384 "zcopy": false, 00:20:44.384 "get_zone_info": false, 00:20:44.384 "zone_management": 
false, 00:20:44.384 "zone_append": false, 00:20:44.384 "compare": false, 00:20:44.384 "compare_and_write": false, 00:20:44.384 "abort": false, 00:20:44.384 "seek_hole": false, 00:20:44.384 "seek_data": false, 00:20:44.384 "copy": false, 00:20:44.384 "nvme_iov_md": false 00:20:44.384 }, 00:20:44.384 "memory_domains": [ 00:20:44.384 { 00:20:44.384 "dma_device_id": "system", 00:20:44.384 "dma_device_type": 1 00:20:44.384 }, 00:20:44.384 { 00:20:44.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.384 "dma_device_type": 2 00:20:44.384 }, 00:20:44.384 { 00:20:44.384 "dma_device_id": "system", 00:20:44.384 "dma_device_type": 1 00:20:44.384 }, 00:20:44.384 { 00:20:44.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.384 "dma_device_type": 2 00:20:44.384 } 00:20:44.384 ], 00:20:44.384 "driver_specific": { 00:20:44.384 "raid": { 00:20:44.384 "uuid": "870abaa7-e871-4fdf-9322-21b4eed9adaf", 00:20:44.384 "strip_size_kb": 0, 00:20:44.384 "state": "online", 00:20:44.384 "raid_level": "raid1", 00:20:44.384 "superblock": true, 00:20:44.384 "num_base_bdevs": 2, 00:20:44.384 "num_base_bdevs_discovered": 2, 00:20:44.384 "num_base_bdevs_operational": 2, 00:20:44.384 "base_bdevs_list": [ 00:20:44.384 { 00:20:44.384 "name": "pt1", 00:20:44.384 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:44.384 "is_configured": true, 00:20:44.384 "data_offset": 256, 00:20:44.384 "data_size": 7936 00:20:44.384 }, 00:20:44.384 { 00:20:44.384 "name": "pt2", 00:20:44.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:44.384 "is_configured": true, 00:20:44.384 "data_offset": 256, 00:20:44.384 "data_size": 7936 00:20:44.384 } 00:20:44.384 ] 00:20:44.384 } 00:20:44.384 } 00:20:44.384 }' 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 
00:20:44.384 pt2' 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:44.384 [2024-11-20 14:31:23.318432] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 870abaa7-e871-4fdf-9322-21b4eed9adaf '!=' 870abaa7-e871-4fdf-9322-21b4eed9adaf ']' 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:44.384 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.384 14:31:23 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.384 [2024-11-20 14:31:23.358167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.644 "name": "raid_bdev1", 00:20:44.644 "uuid": "870abaa7-e871-4fdf-9322-21b4eed9adaf", 00:20:44.644 "strip_size_kb": 0, 00:20:44.644 "state": "online", 00:20:44.644 "raid_level": "raid1", 00:20:44.644 "superblock": true, 00:20:44.644 "num_base_bdevs": 2, 00:20:44.644 "num_base_bdevs_discovered": 1, 00:20:44.644 "num_base_bdevs_operational": 1, 00:20:44.644 "base_bdevs_list": [ 00:20:44.644 { 00:20:44.644 "name": null, 00:20:44.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.644 "is_configured": false, 00:20:44.644 "data_offset": 0, 00:20:44.644 "data_size": 7936 00:20:44.644 }, 00:20:44.644 { 00:20:44.644 "name": "pt2", 00:20:44.644 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:44.644 "is_configured": true, 00:20:44.644 "data_offset": 256, 00:20:44.644 "data_size": 7936 00:20:44.644 } 00:20:44.644 ] 00:20:44.644 }' 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.644 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.903 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:44.903 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.903 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.903 [2024-11-20 14:31:23.822243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:44.903 [2024-11-20 14:31:23.822277] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online 
to offline 00:20:44.903 [2024-11-20 14:31:23.822372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:44.903 [2024-11-20 14:31:23.822438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:44.903 [2024-11-20 14:31:23.822458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:44.903 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.903 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:44.903 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.903 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.903 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.903 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.903 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:44.903 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:44.903 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:44.903 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:44.903 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:44.903 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.904 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:45.162 14:31:23 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.162 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:45.162 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:45.162 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:45.162 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:45.162 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:20:45.162 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:45.162 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.162 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:45.162 [2024-11-20 14:31:23.894354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:45.162 [2024-11-20 14:31:23.894443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.162 [2024-11-20 14:31:23.894470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:45.162 [2024-11-20 14:31:23.894488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.162 [2024-11-20 14:31:23.897081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.162 [2024-11-20 14:31:23.897256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:45.162 [2024-11-20 14:31:23.897348] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:45.162 [2024-11-20 14:31:23.897418] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:45.163 [2024-11-20 14:31:23.897520] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:45.163 [2024-11-20 14:31:23.897542] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:45.163 [2024-11-20 14:31:23.897659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:45.163 [2024-11-20 14:31:23.897752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:45.163 [2024-11-20 14:31:23.897767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:45.163 [2024-11-20 14:31:23.897857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.163 pt2 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.163 "name": "raid_bdev1", 00:20:45.163 "uuid": "870abaa7-e871-4fdf-9322-21b4eed9adaf", 00:20:45.163 "strip_size_kb": 0, 00:20:45.163 "state": "online", 00:20:45.163 "raid_level": "raid1", 00:20:45.163 "superblock": true, 00:20:45.163 "num_base_bdevs": 2, 00:20:45.163 "num_base_bdevs_discovered": 1, 00:20:45.163 "num_base_bdevs_operational": 1, 00:20:45.163 "base_bdevs_list": [ 00:20:45.163 { 00:20:45.163 "name": null, 00:20:45.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.163 "is_configured": false, 00:20:45.163 "data_offset": 256, 00:20:45.163 "data_size": 7936 00:20:45.163 }, 00:20:45.163 { 00:20:45.163 "name": "pt2", 00:20:45.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:45.163 "is_configured": true, 00:20:45.163 "data_offset": 256, 00:20:45.163 "data_size": 7936 00:20:45.163 } 00:20:45.163 ] 00:20:45.163 }' 00:20:45.163 14:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.163 14:31:23 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:45.422 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:45.422 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.422 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:45.422 [2024-11-20 14:31:24.374422] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:45.422 [2024-11-20 14:31:24.374465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:45.422 [2024-11-20 14:31:24.374556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:45.422 [2024-11-20 14:31:24.374628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:45.422 [2024-11-20 14:31:24.374643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:45.422 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.422 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.422 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.422 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:45.422 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:45.422 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.681 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:45.681 14:31:24 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:45.681 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:45.681 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:45.681 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.681 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:45.681 [2024-11-20 14:31:24.434486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:45.681 [2024-11-20 14:31:24.434568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.681 [2024-11-20 14:31:24.434601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:45.682 [2024-11-20 14:31:24.434616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.682 [2024-11-20 14:31:24.437204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.682 [2024-11-20 14:31:24.437252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:45.682 [2024-11-20 14:31:24.437337] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:45.682 [2024-11-20 14:31:24.437402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:45.682 [2024-11-20 14:31:24.437539] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:45.682 [2024-11-20 14:31:24.437557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:45.682 [2024-11-20 14:31:24.437583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:20:45.682 [2024-11-20 14:31:24.437655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:45.682 [2024-11-20 14:31:24.437764] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:45.682 [2024-11-20 14:31:24.437779] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:45.682 [2024-11-20 14:31:24.437874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:45.682 [2024-11-20 14:31:24.437955] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:45.682 [2024-11-20 14:31:24.437973] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:45.682 [2024-11-20 14:31:24.438093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.682 pt1 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:45.682 14:31:24 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.682 "name": "raid_bdev1", 00:20:45.682 "uuid": "870abaa7-e871-4fdf-9322-21b4eed9adaf", 00:20:45.682 "strip_size_kb": 0, 00:20:45.682 "state": "online", 00:20:45.682 "raid_level": "raid1", 00:20:45.682 "superblock": true, 00:20:45.682 "num_base_bdevs": 2, 00:20:45.682 "num_base_bdevs_discovered": 1, 00:20:45.682 "num_base_bdevs_operational": 1, 00:20:45.682 "base_bdevs_list": [ 00:20:45.682 { 00:20:45.682 "name": null, 00:20:45.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.682 "is_configured": false, 00:20:45.682 "data_offset": 256, 00:20:45.682 "data_size": 7936 00:20:45.682 }, 00:20:45.682 { 00:20:45.682 "name": "pt2", 00:20:45.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:45.682 "is_configured": true, 00:20:45.682 "data_offset": 256, 00:20:45.682 
"data_size": 7936 00:20:45.682 } 00:20:45.682 ] 00:20:45.682 }' 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.682 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:46.248 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:46.248 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:46.249 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.249 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:46.249 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.249 14:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:46.249 14:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:46.249 14:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:46.249 14:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.249 14:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:46.249 [2024-11-20 14:31:25.010956] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:46.249 14:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.249 14:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 870abaa7-e871-4fdf-9322-21b4eed9adaf '!=' 870abaa7-e871-4fdf-9322-21b4eed9adaf ']' 00:20:46.249 14:31:25 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89206 00:20:46.249 14:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89206 ']' 00:20:46.249 14:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89206 00:20:46.249 14:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:46.249 14:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.249 14:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89206 00:20:46.249 killing process with pid 89206 00:20:46.249 14:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.249 14:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.249 14:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89206' 00:20:46.249 14:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89206 00:20:46.249 [2024-11-20 14:31:25.082700] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:46.249 14:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89206 00:20:46.249 [2024-11-20 14:31:25.082814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:46.249 [2024-11-20 14:31:25.082882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:46.249 [2024-11-20 14:31:25.082905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:46.507 [2024-11-20 14:31:25.271304] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:47.444 ************************************ 00:20:47.444 END TEST raid_superblock_test_md_interleaved 00:20:47.444 ************************************ 00:20:47.444 14:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:20:47.444 00:20:47.444 real 0m6.511s 00:20:47.444 user 0m10.248s 00:20:47.444 sys 0m0.963s 00:20:47.444 14:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.444 14:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:47.444 14:31:26 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:20:47.444 14:31:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:47.444 14:31:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.444 14:31:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:47.444 ************************************ 00:20:47.444 START TEST raid_rebuild_test_sb_md_interleaved 00:20:47.444 ************************************ 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:20:47.444 14:31:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:47.444 
14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89535 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89535 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89535 ']' 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.444 14:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:47.703 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:47.703 Zero copy mechanism will not be used. 00:20:47.703 [2024-11-20 14:31:26.485431] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:20:47.703 [2024-11-20 14:31:26.485610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89535 ] 00:20:47.703 [2024-11-20 14:31:26.664796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.961 [2024-11-20 14:31:26.794148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.218 [2024-11-20 14:31:26.996600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:48.218 [2024-11-20 14:31:26.996685] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.786 BaseBdev1_malloc 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.786 14:31:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.786 [2024-11-20 14:31:27.580748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:48.786 [2024-11-20 14:31:27.581307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.786 [2024-11-20 14:31:27.581368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:48.786 [2024-11-20 14:31:27.581393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.786 [2024-11-20 14:31:27.584450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.786 [2024-11-20 14:31:27.584827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:48.786 BaseBdev1 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.786 BaseBdev2_malloc 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:48.786 [2024-11-20 14:31:27.635599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:48.786 [2024-11-20 14:31:27.635945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.786 [2024-11-20 14:31:27.636018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:48.786 [2024-11-20 14:31:27.636045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.786 [2024-11-20 14:31:27.638803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.786 [2024-11-20 14:31:27.638893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:48.786 BaseBdev2 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.786 spare_malloc 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.786 spare_delay 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.786 [2024-11-20 14:31:27.700536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:48.786 [2024-11-20 14:31:27.700640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.786 [2024-11-20 14:31:27.700681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:48.786 [2024-11-20 14:31:27.700701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.786 [2024-11-20 14:31:27.703436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.786 [2024-11-20 14:31:27.703504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:48.786 spare 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.786 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.787 [2024-11-20 14:31:27.708634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:48.787 [2024-11-20 14:31:27.711438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:48.787 [2024-11-20 
14:31:27.711740] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:48.787 [2024-11-20 14:31:27.711767] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:48.787 [2024-11-20 14:31:27.711902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:48.787 [2024-11-20 14:31:27.712227] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:48.787 [2024-11-20 14:31:27.712286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:48.787 [2024-11-20 14:31:27.712712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.787 "name": "raid_bdev1", 00:20:48.787 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:20:48.787 "strip_size_kb": 0, 00:20:48.787 "state": "online", 00:20:48.787 "raid_level": "raid1", 00:20:48.787 "superblock": true, 00:20:48.787 "num_base_bdevs": 2, 00:20:48.787 "num_base_bdevs_discovered": 2, 00:20:48.787 "num_base_bdevs_operational": 2, 00:20:48.787 "base_bdevs_list": [ 00:20:48.787 { 00:20:48.787 "name": "BaseBdev1", 00:20:48.787 "uuid": "3abbe6b0-3f13-518d-9be1-2614fbe02c67", 00:20:48.787 "is_configured": true, 00:20:48.787 "data_offset": 256, 00:20:48.787 "data_size": 7936 00:20:48.787 }, 00:20:48.787 { 00:20:48.787 "name": "BaseBdev2", 00:20:48.787 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:20:48.787 "is_configured": true, 00:20:48.787 "data_offset": 256, 00:20:48.787 "data_size": 7936 00:20:48.787 } 00:20:48.787 ] 00:20:48.787 }' 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.787 14:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.353 14:31:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.353 [2024-11-20 14:31:28.209250] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:49.353 14:31:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.353 [2024-11-20 14:31:28.324896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.353 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.612 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.612 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.612 14:31:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.612 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.612 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.612 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.612 "name": "raid_bdev1", 00:20:49.612 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:20:49.612 "strip_size_kb": 0, 00:20:49.612 "state": "online", 00:20:49.612 "raid_level": "raid1", 00:20:49.612 "superblock": true, 00:20:49.612 "num_base_bdevs": 2, 00:20:49.612 "num_base_bdevs_discovered": 1, 00:20:49.612 "num_base_bdevs_operational": 1, 00:20:49.612 "base_bdevs_list": [ 00:20:49.612 { 00:20:49.612 "name": null, 00:20:49.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.612 "is_configured": false, 00:20:49.612 "data_offset": 0, 00:20:49.612 "data_size": 7936 00:20:49.612 }, 00:20:49.612 { 00:20:49.612 "name": "BaseBdev2", 00:20:49.612 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:20:49.612 "is_configured": true, 00:20:49.612 "data_offset": 256, 00:20:49.612 "data_size": 7936 00:20:49.612 } 00:20:49.612 ] 00:20:49.612 }' 00:20:49.612 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.612 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.869 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:49.869 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.869 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.869 [2024-11-20 14:31:28.809059] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:49.869 [2024-11-20 14:31:28.825574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:49.869 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.869 14:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:49.869 [2024-11-20 14:31:28.828139] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.243 "name": "raid_bdev1", 00:20:51.243 
"uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:20:51.243 "strip_size_kb": 0, 00:20:51.243 "state": "online", 00:20:51.243 "raid_level": "raid1", 00:20:51.243 "superblock": true, 00:20:51.243 "num_base_bdevs": 2, 00:20:51.243 "num_base_bdevs_discovered": 2, 00:20:51.243 "num_base_bdevs_operational": 2, 00:20:51.243 "process": { 00:20:51.243 "type": "rebuild", 00:20:51.243 "target": "spare", 00:20:51.243 "progress": { 00:20:51.243 "blocks": 2560, 00:20:51.243 "percent": 32 00:20:51.243 } 00:20:51.243 }, 00:20:51.243 "base_bdevs_list": [ 00:20:51.243 { 00:20:51.243 "name": "spare", 00:20:51.243 "uuid": "85446827-bfad-5081-8618-4cad3b732f8a", 00:20:51.243 "is_configured": true, 00:20:51.243 "data_offset": 256, 00:20:51.243 "data_size": 7936 00:20:51.243 }, 00:20:51.243 { 00:20:51.243 "name": "BaseBdev2", 00:20:51.243 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:20:51.243 "is_configured": true, 00:20:51.243 "data_offset": 256, 00:20:51.243 "data_size": 7936 00:20:51.243 } 00:20:51.243 ] 00:20:51.243 }' 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.243 14:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.243 [2024-11-20 14:31:29.973150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:20:51.243 [2024-11-20 14:31:30.037176] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:51.243 [2024-11-20 14:31:30.037276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.243 [2024-11-20 14:31:30.037300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:51.243 [2024-11-20 14:31:30.037321] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:51.243 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.243 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:51.243 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:51.243 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:51.243 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:51.243 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:51.243 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:51.243 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.243 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.243 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.243 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.243 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.244 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.244 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.244 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.244 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.244 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.244 "name": "raid_bdev1", 00:20:51.244 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:20:51.244 "strip_size_kb": 0, 00:20:51.244 "state": "online", 00:20:51.244 "raid_level": "raid1", 00:20:51.244 "superblock": true, 00:20:51.244 "num_base_bdevs": 2, 00:20:51.244 "num_base_bdevs_discovered": 1, 00:20:51.244 "num_base_bdevs_operational": 1, 00:20:51.244 "base_bdevs_list": [ 00:20:51.244 { 00:20:51.244 "name": null, 00:20:51.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.244 "is_configured": false, 00:20:51.244 "data_offset": 0, 00:20:51.244 "data_size": 7936 00:20:51.244 }, 00:20:51.244 { 00:20:51.244 "name": "BaseBdev2", 00:20:51.244 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:20:51.244 "is_configured": true, 00:20:51.244 "data_offset": 256, 00:20:51.244 "data_size": 7936 00:20:51.244 } 00:20:51.244 ] 00:20:51.244 }' 00:20:51.244 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.244 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.811 "name": "raid_bdev1", 00:20:51.811 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:20:51.811 "strip_size_kb": 0, 00:20:51.811 "state": "online", 00:20:51.811 "raid_level": "raid1", 00:20:51.811 "superblock": true, 00:20:51.811 "num_base_bdevs": 2, 00:20:51.811 "num_base_bdevs_discovered": 1, 00:20:51.811 "num_base_bdevs_operational": 1, 00:20:51.811 "base_bdevs_list": [ 00:20:51.811 { 00:20:51.811 "name": null, 00:20:51.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.811 "is_configured": false, 00:20:51.811 "data_offset": 0, 00:20:51.811 "data_size": 7936 00:20:51.811 }, 00:20:51.811 { 00:20:51.811 "name": "BaseBdev2", 00:20:51.811 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:20:51.811 "is_configured": true, 00:20:51.811 "data_offset": 256, 00:20:51.811 "data_size": 7936 00:20:51.811 } 00:20:51.811 ] 00:20:51.811 }' 
00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.811 [2024-11-20 14:31:30.737622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:51.811 [2024-11-20 14:31:30.753326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.811 14:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:51.811 [2024-11-20 14:31:30.755847] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:52.840 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.840 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:52.840 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:52.840 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:20:52.840 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:52.840 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.840 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.840 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.840 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:52.840 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.840 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:52.840 "name": "raid_bdev1", 00:20:52.840 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:20:52.840 "strip_size_kb": 0, 00:20:52.840 "state": "online", 00:20:52.840 "raid_level": "raid1", 00:20:52.840 "superblock": true, 00:20:52.840 "num_base_bdevs": 2, 00:20:52.840 "num_base_bdevs_discovered": 2, 00:20:52.840 "num_base_bdevs_operational": 2, 00:20:52.840 "process": { 00:20:52.841 "type": "rebuild", 00:20:52.841 "target": "spare", 00:20:52.841 "progress": { 00:20:52.841 "blocks": 2560, 00:20:52.841 "percent": 32 00:20:52.841 } 00:20:52.841 }, 00:20:52.841 "base_bdevs_list": [ 00:20:52.841 { 00:20:52.841 "name": "spare", 00:20:52.841 "uuid": "85446827-bfad-5081-8618-4cad3b732f8a", 00:20:52.841 "is_configured": true, 00:20:52.841 "data_offset": 256, 00:20:52.841 "data_size": 7936 00:20:52.841 }, 00:20:52.841 { 00:20:52.841 "name": "BaseBdev2", 00:20:52.841 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:20:52.841 "is_configured": true, 00:20:52.841 "data_offset": 256, 00:20:52.841 "data_size": 7936 00:20:52.841 } 00:20:52.841 ] 00:20:52.841 }' 00:20:52.841 14:31:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:53.098 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:53.098 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.098 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.098 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:53.098 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:53.098 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:53.099 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:53.099 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:53.099 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:53.099 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=798 00:20:53.099 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:53.099 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.099 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.099 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:53.099 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:53.099 14:31:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:53.099 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.099 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.099 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.099 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:53.099 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.099 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.099 "name": "raid_bdev1", 00:20:53.099 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:20:53.099 "strip_size_kb": 0, 00:20:53.099 "state": "online", 00:20:53.099 "raid_level": "raid1", 00:20:53.099 "superblock": true, 00:20:53.099 "num_base_bdevs": 2, 00:20:53.099 "num_base_bdevs_discovered": 2, 00:20:53.099 "num_base_bdevs_operational": 2, 00:20:53.099 "process": { 00:20:53.099 "type": "rebuild", 00:20:53.099 "target": "spare", 00:20:53.099 "progress": { 00:20:53.099 "blocks": 2816, 00:20:53.099 "percent": 35 00:20:53.099 } 00:20:53.099 }, 00:20:53.099 "base_bdevs_list": [ 00:20:53.099 { 00:20:53.099 "name": "spare", 00:20:53.099 "uuid": "85446827-bfad-5081-8618-4cad3b732f8a", 00:20:53.099 "is_configured": true, 00:20:53.099 "data_offset": 256, 00:20:53.099 "data_size": 7936 00:20:53.099 }, 00:20:53.099 { 00:20:53.099 "name": "BaseBdev2", 00:20:53.099 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:20:53.099 "is_configured": true, 00:20:53.099 "data_offset": 256, 00:20:53.099 "data_size": 7936 00:20:53.099 } 00:20:53.099 ] 00:20:53.099 }' 00:20:53.099 14:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:53.099 14:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:53.099 14:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.099 14:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.099 14:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:54.535 14:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:54.535 14:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:54.535 14:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:54.535 14:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:54.535 14:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:54.535 14:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:54.535 14:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.535 14:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.535 14:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:54.535 14:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.535 14:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.535 14:31:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:54.535 "name": "raid_bdev1", 00:20:54.535 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:20:54.535 "strip_size_kb": 0, 00:20:54.535 "state": "online", 00:20:54.535 "raid_level": "raid1", 00:20:54.535 "superblock": true, 00:20:54.535 "num_base_bdevs": 2, 00:20:54.535 "num_base_bdevs_discovered": 2, 00:20:54.535 "num_base_bdevs_operational": 2, 00:20:54.535 "process": { 00:20:54.535 "type": "rebuild", 00:20:54.535 "target": "spare", 00:20:54.535 "progress": { 00:20:54.535 "blocks": 5888, 00:20:54.535 "percent": 74 00:20:54.535 } 00:20:54.535 }, 00:20:54.535 "base_bdevs_list": [ 00:20:54.535 { 00:20:54.535 "name": "spare", 00:20:54.535 "uuid": "85446827-bfad-5081-8618-4cad3b732f8a", 00:20:54.535 "is_configured": true, 00:20:54.535 "data_offset": 256, 00:20:54.535 "data_size": 7936 00:20:54.535 }, 00:20:54.535 { 00:20:54.535 "name": "BaseBdev2", 00:20:54.535 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:20:54.535 "is_configured": true, 00:20:54.535 "data_offset": 256, 00:20:54.535 "data_size": 7936 00:20:54.535 } 00:20:54.535 ] 00:20:54.535 }' 00:20:54.535 14:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:54.535 14:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.535 14:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:54.535 14:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:54.535 14:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:55.100 [2024-11-20 14:31:33.879555] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:55.100 [2024-11-20 14:31:33.879669] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:55.100 [2024-11-20 14:31:33.879851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.358 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:55.358 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.358 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.358 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:55.358 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:55.358 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.358 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.358 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.358 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:55.358 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.358 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.358 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.358 "name": "raid_bdev1", 00:20:55.358 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:20:55.358 "strip_size_kb": 0, 00:20:55.358 "state": "online", 00:20:55.358 "raid_level": "raid1", 00:20:55.358 "superblock": true, 00:20:55.358 "num_base_bdevs": 2, 00:20:55.358 
"num_base_bdevs_discovered": 2, 00:20:55.358 "num_base_bdevs_operational": 2, 00:20:55.358 "base_bdevs_list": [ 00:20:55.358 { 00:20:55.358 "name": "spare", 00:20:55.358 "uuid": "85446827-bfad-5081-8618-4cad3b732f8a", 00:20:55.358 "is_configured": true, 00:20:55.358 "data_offset": 256, 00:20:55.358 "data_size": 7936 00:20:55.358 }, 00:20:55.358 { 00:20:55.358 "name": "BaseBdev2", 00:20:55.358 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:20:55.358 "is_configured": true, 00:20:55.358 "data_offset": 256, 00:20:55.358 "data_size": 7936 00:20:55.358 } 00:20:55.358 ] 00:20:55.358 }' 00:20:55.358 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.616 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.617 14:31:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.617 "name": "raid_bdev1", 00:20:55.617 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:20:55.617 "strip_size_kb": 0, 00:20:55.617 "state": "online", 00:20:55.617 "raid_level": "raid1", 00:20:55.617 "superblock": true, 00:20:55.617 "num_base_bdevs": 2, 00:20:55.617 "num_base_bdevs_discovered": 2, 00:20:55.617 "num_base_bdevs_operational": 2, 00:20:55.617 "base_bdevs_list": [ 00:20:55.617 { 00:20:55.617 "name": "spare", 00:20:55.617 "uuid": "85446827-bfad-5081-8618-4cad3b732f8a", 00:20:55.617 "is_configured": true, 00:20:55.617 "data_offset": 256, 00:20:55.617 "data_size": 7936 00:20:55.617 }, 00:20:55.617 { 00:20:55.617 "name": "BaseBdev2", 00:20:55.617 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:20:55.617 "is_configured": true, 00:20:55.617 "data_offset": 256, 00:20:55.617 "data_size": 7936 00:20:55.617 } 00:20:55.617 ] 00:20:55.617 }' 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:55.617 14:31:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:55.617 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.875 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.875 "name": 
"raid_bdev1", 00:20:55.875 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:20:55.875 "strip_size_kb": 0, 00:20:55.875 "state": "online", 00:20:55.875 "raid_level": "raid1", 00:20:55.875 "superblock": true, 00:20:55.875 "num_base_bdevs": 2, 00:20:55.875 "num_base_bdevs_discovered": 2, 00:20:55.875 "num_base_bdevs_operational": 2, 00:20:55.875 "base_bdevs_list": [ 00:20:55.875 { 00:20:55.875 "name": "spare", 00:20:55.875 "uuid": "85446827-bfad-5081-8618-4cad3b732f8a", 00:20:55.875 "is_configured": true, 00:20:55.875 "data_offset": 256, 00:20:55.875 "data_size": 7936 00:20:55.875 }, 00:20:55.875 { 00:20:55.875 "name": "BaseBdev2", 00:20:55.875 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:20:55.875 "is_configured": true, 00:20:55.875 "data_offset": 256, 00:20:55.875 "data_size": 7936 00:20:55.875 } 00:20:55.875 ] 00:20:55.875 }' 00:20:55.875 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.875 14:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:56.442 [2024-11-20 14:31:35.124044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:56.442 [2024-11-20 14:31:35.124235] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:56.442 [2024-11-20 14:31:35.124376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.442 [2024-11-20 14:31:35.124474] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:56.442 [2024-11-20 
14:31:35.124491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:56.442 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.442 14:31:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:56.442 [2024-11-20 14:31:35.200060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:56.442 [2024-11-20 14:31:35.200135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.443 [2024-11-20 14:31:35.200168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:56.443 [2024-11-20 14:31:35.200184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.443 [2024-11-20 14:31:35.202787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.443 [2024-11-20 14:31:35.202836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:56.443 [2024-11-20 14:31:35.202923] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:56.443 [2024-11-20 14:31:35.203011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:56.443 [2024-11-20 14:31:35.203174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:56.443 spare 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:56.443 [2024-11-20 14:31:35.303304] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:56.443 [2024-11-20 14:31:35.303617] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:56.443 [2024-11-20 14:31:35.303808] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:56.443 [2024-11-20 14:31:35.303966] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:56.443 [2024-11-20 14:31:35.304015] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:56.443 [2024-11-20 14:31:35.304190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.443 14:31:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.443 "name": "raid_bdev1", 00:20:56.443 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:20:56.443 "strip_size_kb": 0, 00:20:56.443 "state": "online", 00:20:56.443 "raid_level": "raid1", 00:20:56.443 "superblock": true, 00:20:56.443 "num_base_bdevs": 2, 00:20:56.443 "num_base_bdevs_discovered": 2, 00:20:56.443 "num_base_bdevs_operational": 2, 00:20:56.443 "base_bdevs_list": [ 00:20:56.443 { 00:20:56.443 "name": "spare", 00:20:56.443 "uuid": "85446827-bfad-5081-8618-4cad3b732f8a", 00:20:56.443 "is_configured": true, 00:20:56.443 "data_offset": 256, 00:20:56.443 "data_size": 7936 00:20:56.443 }, 00:20:56.443 { 00:20:56.443 "name": "BaseBdev2", 00:20:56.443 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:20:56.443 "is_configured": true, 00:20:56.443 "data_offset": 256, 00:20:56.443 "data_size": 7936 00:20:56.443 } 00:20:56.443 ] 00:20:56.443 }' 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.443 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:57.010 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:57.010 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.010 14:31:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:57.010 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:57.010 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.010 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.010 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.010 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.010 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:57.010 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.010 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.010 "name": "raid_bdev1", 00:20:57.010 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:20:57.010 "strip_size_kb": 0, 00:20:57.010 "state": "online", 00:20:57.010 "raid_level": "raid1", 00:20:57.010 "superblock": true, 00:20:57.010 "num_base_bdevs": 2, 00:20:57.010 "num_base_bdevs_discovered": 2, 00:20:57.010 "num_base_bdevs_operational": 2, 00:20:57.010 "base_bdevs_list": [ 00:20:57.010 { 00:20:57.010 "name": "spare", 00:20:57.010 "uuid": "85446827-bfad-5081-8618-4cad3b732f8a", 00:20:57.010 "is_configured": true, 00:20:57.010 "data_offset": 256, 00:20:57.010 "data_size": 7936 00:20:57.010 }, 00:20:57.010 { 00:20:57.010 "name": "BaseBdev2", 00:20:57.010 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:20:57.010 "is_configured": true, 00:20:57.010 "data_offset": 256, 00:20:57.010 "data_size": 7936 00:20:57.010 } 00:20:57.010 ] 00:20:57.010 }' 00:20:57.010 14:31:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.010 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:57.010 14:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:57.269 [2024-11-20 14:31:36.076520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:57.269 14:31:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:57.269 "name": "raid_bdev1", 00:20:57.269 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:20:57.269 "strip_size_kb": 0, 00:20:57.269 "state": "online", 00:20:57.269 
"raid_level": "raid1", 00:20:57.269 "superblock": true, 00:20:57.269 "num_base_bdevs": 2, 00:20:57.269 "num_base_bdevs_discovered": 1, 00:20:57.269 "num_base_bdevs_operational": 1, 00:20:57.269 "base_bdevs_list": [ 00:20:57.269 { 00:20:57.269 "name": null, 00:20:57.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.269 "is_configured": false, 00:20:57.269 "data_offset": 0, 00:20:57.269 "data_size": 7936 00:20:57.269 }, 00:20:57.269 { 00:20:57.269 "name": "BaseBdev2", 00:20:57.269 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:20:57.269 "is_configured": true, 00:20:57.269 "data_offset": 256, 00:20:57.269 "data_size": 7936 00:20:57.269 } 00:20:57.269 ] 00:20:57.269 }' 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:57.269 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:57.836 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:57.836 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.836 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:57.836 [2024-11-20 14:31:36.604642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:57.836 [2024-11-20 14:31:36.604907] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:57.836 [2024-11-20 14:31:36.604937] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:57.836 [2024-11-20 14:31:36.605010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:57.836 [2024-11-20 14:31:36.620473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:57.836 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.836 14:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:57.836 [2024-11-20 14:31:36.623046] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:58.771 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.771 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.771 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:58.771 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:58.771 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.771 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.771 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.771 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.771 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:58.771 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.771 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:20:58.771 "name": "raid_bdev1", 00:20:58.771 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:20:58.771 "strip_size_kb": 0, 00:20:58.771 "state": "online", 00:20:58.771 "raid_level": "raid1", 00:20:58.771 "superblock": true, 00:20:58.771 "num_base_bdevs": 2, 00:20:58.771 "num_base_bdevs_discovered": 2, 00:20:58.771 "num_base_bdevs_operational": 2, 00:20:58.771 "process": { 00:20:58.771 "type": "rebuild", 00:20:58.771 "target": "spare", 00:20:58.771 "progress": { 00:20:58.771 "blocks": 2560, 00:20:58.771 "percent": 32 00:20:58.771 } 00:20:58.771 }, 00:20:58.771 "base_bdevs_list": [ 00:20:58.771 { 00:20:58.771 "name": "spare", 00:20:58.771 "uuid": "85446827-bfad-5081-8618-4cad3b732f8a", 00:20:58.771 "is_configured": true, 00:20:58.771 "data_offset": 256, 00:20:58.771 "data_size": 7936 00:20:58.771 }, 00:20:58.771 { 00:20:58.771 "name": "BaseBdev2", 00:20:58.771 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:20:58.771 "is_configured": true, 00:20:58.771 "data_offset": 256, 00:20:58.771 "data_size": 7936 00:20:58.771 } 00:20:58.771 ] 00:20:58.771 }' 00:20:58.771 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.771 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:58.771 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.029 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.029 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:59.029 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.030 [2024-11-20 14:31:37.800335] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:59.030 [2024-11-20 14:31:37.832303] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:59.030 [2024-11-20 14:31:37.832420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.030 [2024-11-20 14:31:37.832447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:59.030 [2024-11-20 14:31:37.832463] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.030 14:31:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.030 "name": "raid_bdev1", 00:20:59.030 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:20:59.030 "strip_size_kb": 0, 00:20:59.030 "state": "online", 00:20:59.030 "raid_level": "raid1", 00:20:59.030 "superblock": true, 00:20:59.030 "num_base_bdevs": 2, 00:20:59.030 "num_base_bdevs_discovered": 1, 00:20:59.030 "num_base_bdevs_operational": 1, 00:20:59.030 "base_bdevs_list": [ 00:20:59.030 { 00:20:59.030 "name": null, 00:20:59.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.030 "is_configured": false, 00:20:59.030 "data_offset": 0, 00:20:59.030 "data_size": 7936 00:20:59.030 }, 00:20:59.030 { 00:20:59.030 "name": "BaseBdev2", 00:20:59.030 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:20:59.030 "is_configured": true, 00:20:59.030 "data_offset": 256, 00:20:59.030 "data_size": 7936 00:20:59.030 } 00:20:59.030 ] 00:20:59.030 }' 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.030 14:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.597 14:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:59.598 14:31:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.598 14:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.598 [2024-11-20 14:31:38.400645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:59.598 [2024-11-20 14:31:38.400743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.598 [2024-11-20 14:31:38.400782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:59.598 [2024-11-20 14:31:38.400802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.598 [2024-11-20 14:31:38.401082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.598 [2024-11-20 14:31:38.401114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:59.598 [2024-11-20 14:31:38.401194] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:59.598 [2024-11-20 14:31:38.401218] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:59.598 [2024-11-20 14:31:38.401233] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:59.598 [2024-11-20 14:31:38.401265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:59.598 [2024-11-20 14:31:38.416811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:59.598 spare 00:20:59.598 14:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.598 14:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:59.598 [2024-11-20 14:31:38.419373] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:00.533 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.533 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.533 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:00.533 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:00.533 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.533 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.533 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.533 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.533 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.533 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.533 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:21:00.533 "name": "raid_bdev1", 00:21:00.533 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:21:00.533 "strip_size_kb": 0, 00:21:00.533 "state": "online", 00:21:00.533 "raid_level": "raid1", 00:21:00.533 "superblock": true, 00:21:00.533 "num_base_bdevs": 2, 00:21:00.533 "num_base_bdevs_discovered": 2, 00:21:00.533 "num_base_bdevs_operational": 2, 00:21:00.533 "process": { 00:21:00.533 "type": "rebuild", 00:21:00.533 "target": "spare", 00:21:00.533 "progress": { 00:21:00.533 "blocks": 2560, 00:21:00.533 "percent": 32 00:21:00.533 } 00:21:00.533 }, 00:21:00.533 "base_bdevs_list": [ 00:21:00.533 { 00:21:00.533 "name": "spare", 00:21:00.533 "uuid": "85446827-bfad-5081-8618-4cad3b732f8a", 00:21:00.533 "is_configured": true, 00:21:00.533 "data_offset": 256, 00:21:00.533 "data_size": 7936 00:21:00.533 }, 00:21:00.533 { 00:21:00.533 "name": "BaseBdev2", 00:21:00.533 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:21:00.533 "is_configured": true, 00:21:00.533 "data_offset": 256, 00:21:00.533 "data_size": 7936 00:21:00.533 } 00:21:00.533 ] 00:21:00.533 }' 00:21:00.533 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.533 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.793 [2024-11-20 
14:31:39.560590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:00.793 [2024-11-20 14:31:39.628600] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:00.793 [2024-11-20 14:31:39.628714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.793 [2024-11-20 14:31:39.628745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:00.793 [2024-11-20 14:31:39.628758] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.793 14:31:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.793 "name": "raid_bdev1", 00:21:00.793 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:21:00.793 "strip_size_kb": 0, 00:21:00.793 "state": "online", 00:21:00.793 "raid_level": "raid1", 00:21:00.793 "superblock": true, 00:21:00.793 "num_base_bdevs": 2, 00:21:00.793 "num_base_bdevs_discovered": 1, 00:21:00.793 "num_base_bdevs_operational": 1, 00:21:00.793 "base_bdevs_list": [ 00:21:00.793 { 00:21:00.793 "name": null, 00:21:00.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.793 "is_configured": false, 00:21:00.793 "data_offset": 0, 00:21:00.793 "data_size": 7936 00:21:00.793 }, 00:21:00.793 { 00:21:00.793 "name": "BaseBdev2", 00:21:00.793 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:21:00.793 "is_configured": true, 00:21:00.793 "data_offset": 256, 00:21:00.793 "data_size": 7936 00:21:00.793 } 00:21:00.793 ] 00:21:00.793 }' 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.793 14:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:01.362 14:31:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:01.362 "name": "raid_bdev1", 00:21:01.362 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:21:01.362 "strip_size_kb": 0, 00:21:01.362 "state": "online", 00:21:01.362 "raid_level": "raid1", 00:21:01.362 "superblock": true, 00:21:01.362 "num_base_bdevs": 2, 00:21:01.362 "num_base_bdevs_discovered": 1, 00:21:01.362 "num_base_bdevs_operational": 1, 00:21:01.362 "base_bdevs_list": [ 00:21:01.362 { 00:21:01.362 "name": null, 00:21:01.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.362 "is_configured": false, 00:21:01.362 "data_offset": 0, 00:21:01.362 "data_size": 7936 00:21:01.362 }, 00:21:01.362 { 00:21:01.362 "name": "BaseBdev2", 00:21:01.362 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:21:01.362 "is_configured": true, 00:21:01.362 "data_offset": 256, 
00:21:01.362 "data_size": 7936 00:21:01.362 } 00:21:01.362 ] 00:21:01.362 }' 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.362 [2024-11-20 14:31:40.316930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:01.362 [2024-11-20 14:31:40.317018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.362 [2024-11-20 14:31:40.317054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:01.362 [2024-11-20 14:31:40.317070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.362 [2024-11-20 14:31:40.317293] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.362 [2024-11-20 14:31:40.317319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:01.362 [2024-11-20 14:31:40.317386] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:01.362 [2024-11-20 14:31:40.317407] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:01.362 [2024-11-20 14:31:40.317421] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:01.362 [2024-11-20 14:31:40.317434] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:01.362 BaseBdev1 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.362 14:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:02.738 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:02.738 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.738 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.738 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.738 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.738 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:02.738 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.738 14:31:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.738 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.738 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.738 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.738 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.738 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.738 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.738 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.738 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.738 "name": "raid_bdev1", 00:21:02.738 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:21:02.738 "strip_size_kb": 0, 00:21:02.738 "state": "online", 00:21:02.738 "raid_level": "raid1", 00:21:02.738 "superblock": true, 00:21:02.738 "num_base_bdevs": 2, 00:21:02.738 "num_base_bdevs_discovered": 1, 00:21:02.738 "num_base_bdevs_operational": 1, 00:21:02.738 "base_bdevs_list": [ 00:21:02.738 { 00:21:02.738 "name": null, 00:21:02.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.738 "is_configured": false, 00:21:02.738 "data_offset": 0, 00:21:02.738 "data_size": 7936 00:21:02.738 }, 00:21:02.738 { 00:21:02.738 "name": "BaseBdev2", 00:21:02.738 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:21:02.738 "is_configured": true, 00:21:02.738 "data_offset": 256, 00:21:02.738 "data_size": 7936 00:21:02.738 } 00:21:02.738 ] 00:21:02.738 }' 00:21:02.738 14:31:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.738 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.996 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:02.996 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:02.996 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:02.996 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:02.996 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:02.996 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.996 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.997 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.997 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.997 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.997 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:02.997 "name": "raid_bdev1", 00:21:02.997 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:21:02.997 "strip_size_kb": 0, 00:21:02.997 "state": "online", 00:21:02.997 "raid_level": "raid1", 00:21:02.997 "superblock": true, 00:21:02.997 "num_base_bdevs": 2, 00:21:02.997 "num_base_bdevs_discovered": 1, 00:21:02.997 "num_base_bdevs_operational": 1, 00:21:02.997 "base_bdevs_list": [ 00:21:02.997 { 00:21:02.997 "name": 
null, 00:21:02.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.997 "is_configured": false, 00:21:02.997 "data_offset": 0, 00:21:02.997 "data_size": 7936 00:21:02.997 }, 00:21:02.997 { 00:21:02.997 "name": "BaseBdev2", 00:21:02.997 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:21:02.997 "is_configured": true, 00:21:02.997 "data_offset": 256, 00:21:02.997 "data_size": 7936 00:21:02.997 } 00:21:02.997 ] 00:21:02.997 }' 00:21:02.997 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:02.997 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:02.997 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:03.283 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:03.283 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:03.283 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:21:03.284 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:03.284 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:03.284 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.284 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:03.284 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.284 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:03.284 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.284 14:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:03.284 [2024-11-20 14:31:41.997520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:03.284 [2024-11-20 14:31:41.997725] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:03.284 [2024-11-20 14:31:41.997754] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:03.284 request: 00:21:03.284 { 00:21:03.284 "base_bdev": "BaseBdev1", 00:21:03.284 "raid_bdev": "raid_bdev1", 00:21:03.284 "method": "bdev_raid_add_base_bdev", 00:21:03.284 "req_id": 1 00:21:03.284 } 00:21:03.284 Got JSON-RPC error response 00:21:03.284 response: 00:21:03.284 { 00:21:03.284 "code": -22, 00:21:03.284 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:03.284 } 00:21:03.284 14:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:03.284 14:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:21:03.284 14:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:03.284 14:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:03.284 14:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:03.284 14:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.258 "name": "raid_bdev1", 00:21:04.258 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:21:04.258 "strip_size_kb": 0, 
00:21:04.258 "state": "online", 00:21:04.258 "raid_level": "raid1", 00:21:04.258 "superblock": true, 00:21:04.258 "num_base_bdevs": 2, 00:21:04.258 "num_base_bdevs_discovered": 1, 00:21:04.258 "num_base_bdevs_operational": 1, 00:21:04.258 "base_bdevs_list": [ 00:21:04.258 { 00:21:04.258 "name": null, 00:21:04.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.258 "is_configured": false, 00:21:04.258 "data_offset": 0, 00:21:04.258 "data_size": 7936 00:21:04.258 }, 00:21:04.258 { 00:21:04.258 "name": "BaseBdev2", 00:21:04.258 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:21:04.258 "is_configured": true, 00:21:04.258 "data_offset": 256, 00:21:04.258 "data_size": 7936 00:21:04.258 } 00:21:04.258 ] 00:21:04.258 }' 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.258 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.825 14:31:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.825 "name": "raid_bdev1", 00:21:04.825 "uuid": "b4c5212b-2552-4b9b-a309-ee0b2a1fe99e", 00:21:04.825 "strip_size_kb": 0, 00:21:04.825 "state": "online", 00:21:04.825 "raid_level": "raid1", 00:21:04.825 "superblock": true, 00:21:04.825 "num_base_bdevs": 2, 00:21:04.825 "num_base_bdevs_discovered": 1, 00:21:04.825 "num_base_bdevs_operational": 1, 00:21:04.825 "base_bdevs_list": [ 00:21:04.825 { 00:21:04.825 "name": null, 00:21:04.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.825 "is_configured": false, 00:21:04.825 "data_offset": 0, 00:21:04.825 "data_size": 7936 00:21:04.825 }, 00:21:04.825 { 00:21:04.825 "name": "BaseBdev2", 00:21:04.825 "uuid": "2ca9e19d-d9c2-5170-9931-4fb6e213e0dd", 00:21:04.825 "is_configured": true, 00:21:04.825 "data_offset": 256, 00:21:04.825 "data_size": 7936 00:21:04.825 } 00:21:04.825 ] 00:21:04.825 }' 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89535 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89535 ']' 00:21:04.825 14:31:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89535 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89535 00:21:04.825 killing process with pid 89535 00:21:04.825 Received shutdown signal, test time was about 60.000000 seconds 00:21:04.825 00:21:04.825 Latency(us) 00:21:04.825 [2024-11-20T14:31:43.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.825 [2024-11-20T14:31:43.807Z] =================================================================================================================== 00:21:04.825 [2024-11-20T14:31:43.807Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89535' 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89535 00:21:04.825 [2024-11-20 14:31:43.742329] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:04.825 14:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89535 00:21:04.825 [2024-11-20 14:31:43.742483] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:04.826 [2024-11-20 14:31:43.742551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:21:04.826 [2024-11-20 14:31:43.742570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:05.083 [2024-11-20 14:31:44.013628] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:06.468 14:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:21:06.468 00:21:06.468 real 0m18.648s 00:21:06.468 user 0m25.468s 00:21:06.468 sys 0m1.425s 00:21:06.468 14:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:06.468 ************************************ 00:21:06.468 END TEST raid_rebuild_test_sb_md_interleaved 00:21:06.468 ************************************ 00:21:06.468 14:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.468 14:31:45 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:21:06.468 14:31:45 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:21:06.468 14:31:45 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89535 ']' 00:21:06.468 14:31:45 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89535 00:21:06.468 14:31:45 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:21:06.468 ************************************ 00:21:06.468 END TEST bdev_raid 00:21:06.468 ************************************ 00:21:06.468 00:21:06.468 real 13m1.726s 00:21:06.468 user 18m23.613s 00:21:06.468 sys 1m44.359s 00:21:06.468 14:31:45 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:06.468 14:31:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:06.468 14:31:45 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:06.468 14:31:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:06.468 14:31:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:06.468 14:31:45 -- common/autotest_common.sh@10 -- # set +x 00:21:06.468 
************************************ 00:21:06.468 START TEST spdkcli_raid 00:21:06.468 ************************************ 00:21:06.468 14:31:45 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:06.468 * Looking for test storage... 00:21:06.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:06.468 14:31:45 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:06.468 14:31:45 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:06.468 14:31:45 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:06.468 14:31:45 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:06.468 14:31:45 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:21:06.468 14:31:45 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:06.468 14:31:45 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:06.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.468 --rc genhtml_branch_coverage=1 00:21:06.468 --rc genhtml_function_coverage=1 00:21:06.468 --rc genhtml_legend=1 00:21:06.468 --rc geninfo_all_blocks=1 00:21:06.468 --rc geninfo_unexecuted_blocks=1 00:21:06.468 00:21:06.468 ' 00:21:06.468 14:31:45 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:06.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.468 --rc genhtml_branch_coverage=1 00:21:06.468 --rc genhtml_function_coverage=1 00:21:06.468 --rc genhtml_legend=1 00:21:06.469 --rc geninfo_all_blocks=1 00:21:06.469 --rc geninfo_unexecuted_blocks=1 00:21:06.469 00:21:06.469 ' 00:21:06.469 
14:31:45 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:06.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.469 --rc genhtml_branch_coverage=1 00:21:06.469 --rc genhtml_function_coverage=1 00:21:06.469 --rc genhtml_legend=1 00:21:06.469 --rc geninfo_all_blocks=1 00:21:06.469 --rc geninfo_unexecuted_blocks=1 00:21:06.469 00:21:06.469 ' 00:21:06.469 14:31:45 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:06.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.469 --rc genhtml_branch_coverage=1 00:21:06.469 --rc genhtml_function_coverage=1 00:21:06.469 --rc genhtml_legend=1 00:21:06.469 --rc geninfo_all_blocks=1 00:21:06.469 --rc geninfo_unexecuted_blocks=1 00:21:06.469 00:21:06.469 ' 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:06.469 14:31:45 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:21:06.469 14:31:45 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:06.469 14:31:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90217 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:21:06.469 14:31:45 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90217 00:21:06.469 14:31:45 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90217 ']' 00:21:06.469 14:31:45 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.469 14:31:45 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:06.469 14:31:45 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.469 14:31:45 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:06.469 14:31:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:06.727 [2024-11-20 14:31:45.464528] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:21:06.727 [2024-11-20 14:31:45.464904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90217 ] 00:21:06.727 [2024-11-20 14:31:45.657660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:06.986 [2024-11-20 14:31:45.822506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.986 [2024-11-20 14:31:45.822516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.923 14:31:46 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.923 14:31:46 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:21:07.923 14:31:46 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:21:07.923 14:31:46 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:07.923 14:31:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:07.923 14:31:46 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:21:07.923 14:31:46 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:07.923 14:31:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:07.923 14:31:46 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:07.923 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:07.923 ' 00:21:09.874 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:21:09.874 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:21:09.874 14:31:48 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:21:09.874 14:31:48 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:09.874 14:31:48 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:21:09.874 14:31:48 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:21:09.874 14:31:48 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:09.874 14:31:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:09.874 14:31:48 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:21:09.874 ' 00:21:10.809 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:21:10.809 14:31:49 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:21:10.809 14:31:49 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:10.809 14:31:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:10.809 14:31:49 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:21:10.809 14:31:49 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.809 14:31:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:10.809 14:31:49 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:21:10.809 14:31:49 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:21:11.375 14:31:50 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:21:11.634 14:31:50 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:21:11.634 14:31:50 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:21:11.634 14:31:50 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:11.634 14:31:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:11.634 14:31:50 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:21:11.634 14:31:50 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:11.634 14:31:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:11.634 14:31:50 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:21:11.634 ' 00:21:12.569 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:21:12.828 14:31:51 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:21:12.828 14:31:51 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:12.828 14:31:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:12.828 14:31:51 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:21:12.828 14:31:51 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:12.828 14:31:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:12.828 14:31:51 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:21:12.828 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:21:12.828 ' 00:21:14.205 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:21:14.205 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:21:14.205 14:31:53 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:21:14.205 14:31:53 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:14.205 14:31:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:14.478 14:31:53 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90217 00:21:14.478 14:31:53 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90217 ']' 00:21:14.478 14:31:53 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90217 00:21:14.478 14:31:53 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:21:14.478 14:31:53 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.478 14:31:53 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90217 00:21:14.478 killing process with pid 90217 00:21:14.478 14:31:53 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.478 14:31:53 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.478 14:31:53 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90217' 00:21:14.478 14:31:53 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90217 00:21:14.478 14:31:53 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90217 00:21:17.021 14:31:55 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:21:17.021 14:31:55 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90217 ']' 00:21:17.021 14:31:55 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90217 00:21:17.021 14:31:55 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90217 ']' 00:21:17.021 14:31:55 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90217 00:21:17.021 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90217) - No such process 00:21:17.021 Process with pid 90217 is not found 00:21:17.021 14:31:55 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90217 is not found' 00:21:17.021 14:31:55 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:21:17.021 14:31:55 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:17.021 14:31:55 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:17.021 14:31:55 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:17.021 ************************************ 00:21:17.021 END TEST spdkcli_raid 
00:21:17.021 ************************************ 00:21:17.021 00:21:17.021 real 0m10.360s 00:21:17.021 user 0m21.614s 00:21:17.021 sys 0m1.129s 00:21:17.021 14:31:55 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:17.021 14:31:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:17.021 14:31:55 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:17.021 14:31:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:17.021 14:31:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:17.021 14:31:55 -- common/autotest_common.sh@10 -- # set +x 00:21:17.021 ************************************ 00:21:17.021 START TEST blockdev_raid5f 00:21:17.021 ************************************ 00:21:17.021 14:31:55 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:17.021 * Looking for test storage... 00:21:17.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:21:17.021 14:31:55 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:17.021 14:31:55 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:17.021 14:31:55 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:21:17.021 14:31:55 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:17.021 14:31:55 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:17.021 14:31:55 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:17.021 14:31:55 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:17.021 14:31:55 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:21:17.021 14:31:55 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:21:17.021 14:31:55 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:21:17.021 14:31:55 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:21:17.021 14:31:55 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:21:17.021 14:31:55 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:21:17.021 14:31:55 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:17.022 14:31:55 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:21:17.022 14:31:55 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:17.022 14:31:55 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:17.022 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.022 --rc genhtml_branch_coverage=1 00:21:17.022 --rc genhtml_function_coverage=1 00:21:17.022 --rc genhtml_legend=1 00:21:17.022 --rc geninfo_all_blocks=1 00:21:17.022 --rc geninfo_unexecuted_blocks=1 00:21:17.022 00:21:17.022 ' 00:21:17.022 14:31:55 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:17.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.022 --rc genhtml_branch_coverage=1 00:21:17.022 --rc genhtml_function_coverage=1 00:21:17.022 --rc genhtml_legend=1 00:21:17.022 --rc geninfo_all_blocks=1 00:21:17.022 --rc geninfo_unexecuted_blocks=1 00:21:17.022 00:21:17.022 ' 00:21:17.022 14:31:55 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:17.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.022 --rc genhtml_branch_coverage=1 00:21:17.022 --rc genhtml_function_coverage=1 00:21:17.022 --rc genhtml_legend=1 00:21:17.022 --rc geninfo_all_blocks=1 00:21:17.022 --rc geninfo_unexecuted_blocks=1 00:21:17.022 00:21:17.022 ' 00:21:17.022 14:31:55 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:17.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.022 --rc genhtml_branch_coverage=1 00:21:17.022 --rc genhtml_function_coverage=1 00:21:17.022 --rc genhtml_legend=1 00:21:17.022 --rc geninfo_all_blocks=1 00:21:17.022 --rc geninfo_unexecuted_blocks=1 00:21:17.022 00:21:17.022 ' 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90493 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:21:17.022 14:31:55 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90493 00:21:17.022 14:31:55 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90493 ']' 00:21:17.022 14:31:55 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.022 14:31:55 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.022 14:31:55 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.022 14:31:55 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.022 14:31:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:17.022 [2024-11-20 14:31:55.849573] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:21:17.022 [2024-11-20 14:31:55.849754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90493 ] 00:21:17.281 [2024-11-20 14:31:56.022209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.281 [2024-11-20 14:31:56.155286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.218 14:31:57 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.218 14:31:57 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:21:18.218 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:21:18.218 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:21:18.218 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:21:18.218 14:31:57 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.218 14:31:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:18.218 Malloc0 00:21:18.218 Malloc1 00:21:18.218 Malloc2 00:21:18.218 14:31:57 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.218 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:21:18.218 14:31:57 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.218 14:31:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:18.218 14:31:57 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.218 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:21:18.218 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:21:18.218 14:31:57 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.218 14:31:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:18.218 14:31:57 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.218 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:21:18.218 14:31:57 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.218 14:31:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:18.477 14:31:57 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.477 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:18.477 14:31:57 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.477 14:31:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:18.477 14:31:57 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.477 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:21:18.477 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:21:18.477 14:31:57 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.477 14:31:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:18.477 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:21:18.477 14:31:57 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.477 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:21:18.477 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:21:18.477 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "09225534-3ae2-453c-85fd-633e8bf36f22"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "09225534-3ae2-453c-85fd-633e8bf36f22",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "09225534-3ae2-453c-85fd-633e8bf36f22",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "d36827a7-ec25-4f3b-97ad-b79c58c61cc9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "38ce2e28-a272-4536-88a1-9684de8edcb0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d4caee74-d476-4ec5-8cd4-39038abda44f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:18.477 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:21:18.477 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:21:18.477 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:21:18.477 14:31:57 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90493 00:21:18.477 14:31:57 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90493 ']' 00:21:18.477 14:31:57 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90493 00:21:18.477 14:31:57 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:21:18.477 14:31:57 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:18.477 
14:31:57 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90493 00:21:18.477 14:31:57 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:18.477 14:31:57 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:18.477 killing process with pid 90493 00:21:18.477 14:31:57 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90493' 00:21:18.477 14:31:57 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90493 00:21:18.477 14:31:57 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90493 00:21:21.012 14:31:59 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:21.012 14:31:59 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:21:21.012 14:31:59 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:21.012 14:31:59 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.012 14:31:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:21.012 ************************************ 00:21:21.012 START TEST bdev_hello_world 00:21:21.012 ************************************ 00:21:21.012 14:31:59 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:21:21.271 [2024-11-20 14:32:00.024026] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:21:21.271 [2024-11-20 14:32:00.024241] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90569 ] 00:21:21.271 [2024-11-20 14:32:00.210887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.528 [2024-11-20 14:32:00.344669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.095 [2024-11-20 14:32:00.885816] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:21:22.095 [2024-11-20 14:32:00.885876] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:21:22.095 [2024-11-20 14:32:00.885902] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:21:22.095 [2024-11-20 14:32:00.886520] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:21:22.095 [2024-11-20 14:32:00.886712] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:21:22.095 [2024-11-20 14:32:00.886741] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:21:22.095 [2024-11-20 14:32:00.886813] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:21:22.095 00:21:22.095 [2024-11-20 14:32:00.886843] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:23.470 00:21:23.470 real 0m2.301s 00:21:23.470 user 0m1.862s 00:21:23.470 sys 0m0.313s 00:21:23.470 14:32:02 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.470 14:32:02 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:23.470 ************************************ 00:21:23.470 END TEST bdev_hello_world 00:21:23.470 ************************************ 00:21:23.470 14:32:02 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:21:23.470 14:32:02 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:23.470 14:32:02 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.470 14:32:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:23.470 ************************************ 00:21:23.470 START TEST bdev_bounds 00:21:23.470 ************************************ 00:21:23.470 14:32:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:21:23.470 14:32:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90611 00:21:23.470 14:32:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:23.470 Process bdevio pid: 90611 00:21:23.470 14:32:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90611' 00:21:23.470 14:32:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90611 00:21:23.470 14:32:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:23.470 14:32:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90611 ']' 00:21:23.471 14:32:02 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.471 14:32:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.471 14:32:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.471 14:32:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.471 14:32:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:23.471 [2024-11-20 14:32:02.379920] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:21:23.471 [2024-11-20 14:32:02.380119] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90611 ] 00:21:23.729 [2024-11-20 14:32:02.575392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:23.987 [2024-11-20 14:32:02.719847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.988 [2024-11-20 14:32:02.719937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.988 [2024-11-20 14:32:02.719944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.557 14:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.557 14:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:21:24.557 14:32:03 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:24.557 I/O targets: 00:21:24.557 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:21:24.557 00:21:24.557 
00:21:24.557 CUnit - A unit testing framework for C - Version 2.1-3 00:21:24.557 http://cunit.sourceforge.net/ 00:21:24.557 00:21:24.557 00:21:24.557 Suite: bdevio tests on: raid5f 00:21:24.557 Test: blockdev write read block ...passed 00:21:24.557 Test: blockdev write zeroes read block ...passed 00:21:24.557 Test: blockdev write zeroes read no split ...passed 00:21:24.816 Test: blockdev write zeroes read split ...passed 00:21:24.816 Test: blockdev write zeroes read split partial ...passed 00:21:24.816 Test: blockdev reset ...passed 00:21:24.816 Test: blockdev write read 8 blocks ...passed 00:21:24.816 Test: blockdev write read size > 128k ...passed 00:21:24.816 Test: blockdev write read invalid size ...passed 00:21:24.816 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:24.816 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:24.816 Test: blockdev write read max offset ...passed 00:21:24.816 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:24.816 Test: blockdev writev readv 8 blocks ...passed 00:21:24.816 Test: blockdev writev readv 30 x 1block ...passed 00:21:24.816 Test: blockdev writev readv block ...passed 00:21:24.816 Test: blockdev writev readv size > 128k ...passed 00:21:24.816 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:24.816 Test: blockdev comparev and writev ...passed 00:21:24.816 Test: blockdev nvme passthru rw ...passed 00:21:24.816 Test: blockdev nvme passthru vendor specific ...passed 00:21:24.816 Test: blockdev nvme admin passthru ...passed 00:21:24.816 Test: blockdev copy ...passed 00:21:24.816 00:21:24.816 Run Summary: Type Total Ran Passed Failed Inactive 00:21:24.816 suites 1 1 n/a 0 0 00:21:24.816 tests 23 23 23 0 0 00:21:24.816 asserts 130 130 130 0 n/a 00:21:24.816 00:21:24.816 Elapsed time = 0.527 seconds 00:21:24.816 0 00:21:24.816 14:32:03 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90611 00:21:24.816 
14:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90611 ']' 00:21:24.816 14:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90611 00:21:24.816 14:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:21:24.816 14:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.816 14:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90611 00:21:24.816 14:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:24.816 14:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:24.816 killing process with pid 90611 00:21:24.816 14:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90611' 00:21:24.816 14:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90611 00:21:24.816 14:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90611 00:21:26.220 14:32:05 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:21:26.220 00:21:26.220 real 0m2.838s 00:21:26.220 user 0m7.028s 00:21:26.220 sys 0m0.445s 00:21:26.220 14:32:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.220 ************************************ 00:21:26.220 END TEST bdev_bounds 00:21:26.221 14:32:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:26.221 ************************************ 00:21:26.221 14:32:05 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:26.221 14:32:05 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:26.221 14:32:05 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.221 
14:32:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:26.221 ************************************ 00:21:26.221 START TEST bdev_nbd 00:21:26.221 ************************************ 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90672 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90672 /var/tmp/spdk-nbd.sock 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90672 ']' 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.221 14:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:26.479 [2024-11-20 14:32:05.312381] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:21:26.479 [2024-11-20 14:32:05.313163] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.737 [2024-11-20 14:32:05.493439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.737 [2024-11-20 14:32:05.623570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.304 14:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.304 14:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:21:27.304 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:21:27.304 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:27.304 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:21:27.304 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:27.304 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:21:27.304 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:27.304 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:21:27.304 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:27.304 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:21:27.304 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:27.304 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:27.304 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:27.304 14:32:06 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:21:27.871 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:27.871 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:27.872 1+0 records in 00:21:27.872 1+0 records out 00:21:27.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036625 s, 11.2 MB/s 00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:27.872 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:28.130 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:21:28.130 { 00:21:28.130 "nbd_device": "/dev/nbd0", 00:21:28.130 "bdev_name": "raid5f" 00:21:28.130 } 00:21:28.130 ]' 00:21:28.130 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:21:28.130 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:21:28.130 { 00:21:28.130 "nbd_device": "/dev/nbd0", 00:21:28.130 "bdev_name": "raid5f" 00:21:28.130 } 00:21:28.130 ]' 00:21:28.130 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:21:28.130 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:28.130 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:28.130 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:28.130 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:28.130 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:28.130 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:28.131 14:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:28.388 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0
00:21:28.388 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:21:28.388 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:21:28.388 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:28.388 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:28.388 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:21:28.388 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:21:28.388 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:21:28.388 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:21:28.388 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:21:28.388 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f')
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0')
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f')
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:21:28.646 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:21:28.647 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:21:28.647 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:28.647 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0
00:21:28.905 /dev/nbd0
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:28.905 1+0 records in
00:21:28.905 1+0 records out
00:21:28.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383191 s, 10.7 MB/s
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:21:28.905 14:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:21:29.471 {
00:21:29.471 "nbd_device": "/dev/nbd0",
00:21:29.471 "bdev_name": "raid5f"
00:21:29.471 }
00:21:29.471 ]'
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:21:29.471 {
00:21:29.471 "nbd_device": "/dev/nbd0",
00:21:29.471 "bdev_name": "raid5f"
00:21:29.471 }
00:21:29.471 ]'
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']'
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:21:29.471 256+0 records in
00:21:29.471 256+0 records out
00:21:29.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00669547 s, 157 MB/s
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:21:29.471 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:21:29.472 256+0 records in
00:21:29.472 256+0 records out
00:21:29.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0352239 s, 29.8 MB/s
00:21:29.472 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify
00:21:29.472 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:21:29.472 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:21:29.472 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:21:29.472 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:21:29.472 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:21:29.472 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:21:29.472 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:21:29.472 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:21:29.472 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:21:29.472 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:21:29.472 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:21:29.472 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:21:29.472 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:29.472 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:21:29.472 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:29.472 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:21:29.729 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:21:29.729 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:21:29.729 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:21:29.729 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:29.729 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:29.729 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:21:29.729 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:21:29.729 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:21:29.729 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:21:29.729 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:21:29.729 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
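The `waitfornbd` and `waitfornbd_exit` helpers traced above both follow the same pattern: poll `/proc/partitions` up to 20 times until the nbd device shows up (or, for the exit variant, disappears). A minimal sketch of that polling loop; the partitions file is made an argument here for testability, and the 0.1s retry delay is an assumption, since the real helper's sleep interval is not visible in this trace:

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd pattern from nbd_common.sh / autotest_common.sh:
# retry up to 20 times until the named device appears in the partitions list.
# The partitions-file parameter and the 0.1s sleep are illustration-only
# assumptions; the real helper reads /proc/partitions directly.
waitfornbd() {
    local nbd_name=$1 partitions=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" "$partitions"; then
            return 0    # device is visible to the kernel
        fi
        sleep 0.1
    done
    return 1            # gave up after 20 attempts
}
```

`waitfornbd_exit` is the mirror image: it keeps polling until the `grep` fails, i.e. until the kernel has torn the device down after `nbd_stop_disk`.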
00:21:29.987 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:21:29.987 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:21:29.987 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:21:29.987 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:21:29.987 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:21:29.987 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:21:29.987 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:21:29.987 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:21:29.987 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:21:29.987 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:21:29.987 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:21:29.987 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:21:29.987 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:21:29.987 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:21:29.987 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:21:29.987 14:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:21:30.246 malloc_lvol_verify
00:21:30.246 14:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:21:30.524 e059fa0c-1dec-4bac-a07b-76af73af35d8
00:21:30.524 14:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:21:30.781 d4a15794-c244-4a35-9c8c-bc52428b0fac
00:21:30.781 14:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:21:31.039 /dev/nbd0
00:21:31.039 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:21:31.039 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:21:31.039 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:21:31.039 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:21:31.039 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:21:31.039 mke2fs 1.47.0 (5-Feb-2023)
00:21:31.039 Discarding device blocks: 0/4096 done
00:21:31.039 Creating filesystem with 4096 1k blocks and 1024 inodes
00:21:31.039
00:21:31.039 Allocating group tables: 0/1 done
00:21:31.297 Writing inode tables: 0/1 done
00:21:31.297 Creating journal (1024 blocks): done
00:21:31.297 Writing superblocks and filesystem accounting information: 0/1 done
00:21:31.297
00:21:31.297 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:21:31.297 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:21:31.297 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:21:31.297 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:31.297 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:21:31.297 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:31.298 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90672
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90672 ']'
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90672
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90672
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 90672
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90672'
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90672
00:21:31.556 14:32:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@978 -- # wait 90672
00:21:32.934 14:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:21:32.934
00:21:32.934 real 0m6.565s
00:21:32.934 user 0m9.536s
00:21:32.934 sys 0m1.339s
00:21:32.934 14:32:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:32.934 14:32:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:21:32.934 ************************************
00:21:32.934 END TEST bdev_nbd
00:21:32.934 ************************************
00:21:32.934 14:32:11 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:21:32.934 14:32:11 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']'
00:21:32.934 14:32:11 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']'
00:21:32.934 14:32:11 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite ''
00:21:32.934 14:32:11 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:32.934 14:32:11 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:32.934 14:32:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:21:32.934 ************************************
00:21:32.934 START TEST bdev_fio
00:21:32.934 ************************************
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite ''
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:21:32.934 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']'
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']'
00:21:32.934 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']'
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
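The AIO branch of `fio_config_gen` traced above gates on the fio build before adding fio-3-only job options: it runs `/usr/src/fio/fio --version` and glob-matches the output against `*fio-3*` (the escaped form `*\f\i\o\-\3*` is how bash xtrace prints it). A re-creation of just that glob check; the wrapper function name is ours, since the real script inlines the `[[ ]]` test:

```shell
#!/usr/bin/env bash
# Re-creation of the version gate seen in the trace:
#   [[ fio-3.35 == *\f\i\o\-\3* ]]
# Inside [[ ]], the unquoted right-hand side is a glob pattern, so this
# succeeds for any version string containing "fio-3".
fio_is_v3() {
    local ver=$1    # e.g. the output of `fio --version`, such as "fio-3.35"
    [[ $ver == *fio-3* ]]
}
```

Only when the check succeeds does the generator continue (here, emitting `serialize_overlap=1` into the job file), so older fio builds silently skip the option.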
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]'
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']'
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:21:32.935 ************************************
00:21:32.935 START TEST bdev_fio_rw_verify
00:21:32.935 ************************************
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib=
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:21:32.935 14:32:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan
00:21:33.193 14:32:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:21:33.193 14:32:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:21:33.193 14:32:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break
00:21:33.193 14:32:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:21:33.193 14:32:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:21:33.452 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:21:33.452 fio-3.35
00:21:33.452 Starting 1 thread
00:21:45.665
00:21:45.665 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90876: Wed Nov 20 14:32:23 2024
00:21:45.665 read: IOPS=8345, BW=32.6MiB/s (34.2MB/s)(326MiB/10001msec)
00:21:45.665 slat (usec): min=23, max=631, avg=29.36, stdev= 6.81
00:21:45.665 clat (usec): min=13, max=897, avg=188.41, stdev=70.99
00:21:45.665 lat (usec): min=41, max=926, avg=217.77, stdev=72.15
00:21:45.665 clat percentiles (usec):
00:21:45.665 | 50.000th=[ 192], 99.000th=[ 330], 99.900th=[ 453], 99.990th=[ 652],
00:21:45.665 | 99.999th=[ 898]
00:21:45.665 write: IOPS=8738, BW=34.1MiB/s (35.8MB/s)(337MiB/9868msec); 0 zone resets
00:21:45.665 slat (usec): min=12, max=283, avg=24.83, stdev= 7.05
00:21:45.665 clat (usec): min=85, max=1588, avg=438.50, stdev=71.40
00:21:45.665 lat (usec): min=123, max=1613, avg=463.32, stdev=73.65
00:21:45.665 clat percentiles (usec):
00:21:45.665 | 50.000th=[ 441], 99.000th=[ 652], 99.900th=[ 1106], 99.990th=[ 1254],
00:21:45.665 | 99.999th=[ 1582]
00:21:45.665 bw ( KiB/s): min=31112, max=37136, per=99.22%, avg=34680.84, stdev=1680.81, samples=19
00:21:45.665 iops : min= 7778, max= 9284, avg=8670.21, stdev=420.20, samples=19
00:21:45.665 lat (usec) : 20=0.01%, 100=5.86%, 250=31.52%, 500=57.39%, 750=4.96%
00:21:45.665 lat (usec) : 1000=0.15%
00:21:45.665 lat (msec) : 2=0.11%
00:21:45.665 cpu : usr=98.11%, sys=0.64%, ctx=45, majf=0, minf=7258
00:21:45.665 IO depths : 1=7.8%, 2=20.0%, 4=55.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:21:45.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:45.665 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:45.665 issued rwts: total=83463,86231,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:45.665 latency : target=0, window=0, percentile=100.00%, depth=8
00:21:45.665
00:21:45.665 Run status group 0 (all jobs):
00:21:45.665 READ: bw=32.6MiB/s (34.2MB/s), 32.6MiB/s-32.6MiB/s (34.2MB/s-34.2MB/s), io=326MiB (342MB), run=10001-10001msec
00:21:45.665 WRITE: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=337MiB (353MB), run=9868-9868msec
00:21:45.922 -----------------------------------------------------
00:21:45.922 Suppressions used:
00:21:45.922 count bytes template
00:21:45.922 1 7 /usr/src/fio/parse.c
00:21:45.922 188 18048 /usr/src/fio/iolog.c
00:21:45.922 1 8 libtcmalloc_minimal.so
00:21:45.922 1 904 libcrypto.so
00:21:45.922 -----------------------------------------------------
00:21:45.922
00:21:45.922
00:21:45.922 real 0m12.860s
00:21:45.922 user 0m13.099s
00:21:45.922 sys 0m0.699s
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:21:45.922 ************************************
00:21:45.922 END TEST bdev_fio_rw_verify
00:21:45.922 ************************************
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']'
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']'
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']'
00:21:45.922 14:32:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite
00:21:45.923 14:32:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "09225534-3ae2-453c-85fd-633e8bf36f22"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "09225534-3ae2-453c-85fd-633e8bf36f22",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "09225534-3ae2-453c-85fd-633e8bf36f22",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "d36827a7-ec25-4f3b-97ad-b79c58c61cc9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "38ce2e28-a272-4536-88a1-9684de8edcb0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d4caee74-d476-4ec5-8cd4-39038abda44f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}'
00:21:45.923 14:32:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:21:45.923 14:32:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]]
00:21:45.923 14:32:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:21:45.923 /home/vagrant/spdk_repo/spdk
00:21:45.923 14:32:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd
00:21:45.923 14:32:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT ************************************
00:21:45.923 END TEST bdev_fio ************************************
00:21:45.923 14:32:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0
00:21:45.923
00:21:45.923 real 0m13.082s
00:21:45.923 user 0m13.200s
00:21:45.923 sys 0m0.797s
00:21:45.923 14:32:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:45.923 14:32:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:21:46.180 14:32:24 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:21:46.180 14:32:24 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:21:46.180 14:32:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:21:46.180 14:32:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:46.180 14:32:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:21:46.180 ************************************
00:21:46.180 START TEST bdev_verify
00:21:46.180 ************************************
00:21:46.180 14:32:24 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:21:46.180 [2024-11-20 14:32:25.025092] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:21:46.180 [2024-11-20 14:32:25.025266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91040 ]
00:21:46.437 [2024-11-20 14:32:25.255203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:21:46.437 [2024-11-20 14:32:25.399428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:46.437 [2024-11-20 14:32:25.399442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:47.001 Running I/O for 5 seconds...
00:21:49.309 12983.00 IOPS, 50.71 MiB/s [2024-11-20T14:32:29.226Z] 13332.00 IOPS, 52.08 MiB/s [2024-11-20T14:32:30.158Z] 13141.67 IOPS, 51.33 MiB/s [2024-11-20T14:32:31.092Z] 13274.50 IOPS, 51.85 MiB/s [2024-11-20T14:32:31.092Z] 13218.60 IOPS, 51.64 MiB/s
00:21:52.110 Latency(us)
00:21:52.110 [2024-11-20T14:32:31.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:52.110 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:52.110 Verification LBA range: start 0x0 length 0x2000
00:21:52.110 raid5f : 5.01 6701.01 26.18 0.00 0.00 28726.99 256.93 23116.33
00:21:52.110 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:52.110 Verification LBA range: start 0x2000 length 0x2000
00:21:52.110 raid5f : 5.02 6504.98 25.41 0.00 0.00 29590.71 128.47 27644.28
00:21:52.110 [2024-11-20T14:32:31.092Z] ===================================================================================================================
00:21:52.110 [2024-11-20T14:32:31.092Z] Total : 13205.99 51.59 0.00 0.00 29152.90 128.47 27644.28
00:21:53.486
00:21:53.486 real 0m7.347s
00:21:53.486 user 0m13.418s
00:21:53.486 sys 0m0.324s
00:21:53.486 14:32:32 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:53.486 14:32:32 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:21:53.486 ************************************
00:21:53.486 END TEST bdev_verify
00:21:53.486 ************************************
00:21:53.486 14:32:32 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:21:53.486 14:32:32 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:21:53.486 14:32:32 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:53.486 14:32:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:21:53.486 ************************************
00:21:53.486 START TEST bdev_verify_big_io
00:21:53.486 ************************************
00:21:53.486 14:32:32 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:21:53.486 [2024-11-20 14:32:32.436039] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:21:53.486 [2024-11-20 14:32:32.436239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91133 ]
00:21:53.744 [2024-11-20 14:32:32.613657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:21:54.002 [2024-11-20 14:32:32.740538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:54.002 [2024-11-20 14:32:32.740550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:54.568 Running I/O for 5 seconds...
00:21:56.451 630.00 IOPS, 39.38 MiB/s [2024-11-20T14:32:36.371Z] 759.00 IOPS, 47.44 MiB/s [2024-11-20T14:32:37.772Z] 761.33 IOPS, 47.58 MiB/s [2024-11-20T14:32:38.714Z] 761.50 IOPS, 47.59 MiB/s [2024-11-20T14:32:38.714Z] 761.60 IOPS, 47.60 MiB/s 00:21:59.732 Latency(us) 00:21:59.732 [2024-11-20T14:32:38.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.732 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:59.732 Verification LBA range: start 0x0 length 0x200 00:21:59.733 raid5f : 5.18 392.59 24.54 0.00 0.00 8062101.32 232.73 348889.83 00:21:59.733 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:59.733 Verification LBA range: start 0x200 length 0x200 00:21:59.733 raid5f : 5.21 389.79 24.36 0.00 0.00 8149598.81 200.15 350796.33 00:21:59.733 [2024-11-20T14:32:38.715Z] =================================================================================================================== 00:21:59.733 [2024-11-20T14:32:38.715Z] Total : 782.38 48.90 0.00 0.00 8105850.06 200.15 350796.33 00:22:01.107 00:22:01.107 real 0m7.519s 00:22:01.107 user 0m13.835s 00:22:01.107 sys 0m0.306s 00:22:01.107 14:32:39 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.107 ************************************ 00:22:01.107 END TEST bdev_verify_big_io 00:22:01.107 ************************************ 00:22:01.107 14:32:39 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:22:01.107 14:32:39 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:01.107 14:32:39 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:01.107 14:32:39 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:01.107 14:32:39 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:01.107 ************************************ 00:22:01.107 START TEST bdev_write_zeroes 00:22:01.107 ************************************ 00:22:01.107 14:32:39 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:01.107 [2024-11-20 14:32:39.974852] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:22:01.107 [2024-11-20 14:32:39.975025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91232 ] 00:22:01.365 [2024-11-20 14:32:40.159311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.365 [2024-11-20 14:32:40.314439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.932 Running I/O for 1 seconds... 
00:22:03.306 19215.00 IOPS, 75.06 MiB/s 00:22:03.306 Latency(us) 00:22:03.306 [2024-11-20T14:32:42.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.306 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:03.306 raid5f : 1.01 19199.50 75.00 0.00 0.00 6639.67 1980.97 11856.06 00:22:03.306 [2024-11-20T14:32:42.288Z] =================================================================================================================== 00:22:03.306 [2024-11-20T14:32:42.288Z] Total : 19199.50 75.00 0.00 0.00 6639.67 1980.97 11856.06 00:22:04.680 ************************************ 00:22:04.680 END TEST bdev_write_zeroes 00:22:04.680 00:22:04.680 real 0m3.370s 00:22:04.680 user 0m2.909s 00:22:04.680 sys 0m0.325s 00:22:04.680 14:32:43 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.680 14:32:43 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:22:04.680 ************************************ 00:22:04.680 14:32:43 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:04.680 14:32:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:04.680 14:32:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:04.680 14:32:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:04.680 ************************************ 00:22:04.680 START TEST bdev_json_nonenclosed 00:22:04.680 ************************************ 00:22:04.680 14:32:43 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:04.680 [2024-11-20 
14:32:43.403663] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:22:04.680 [2024-11-20 14:32:43.403858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91285 ] 00:22:04.680 [2024-11-20 14:32:43.593072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.938 [2024-11-20 14:32:43.756853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.938 [2024-11-20 14:32:43.757023] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:22:04.938 [2024-11-20 14:32:43.757070] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:04.938 [2024-11-20 14:32:43.757087] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:05.196 ************************************ 00:22:05.196 END TEST bdev_json_nonenclosed 00:22:05.196 ************************************ 00:22:05.196 00:22:05.196 real 0m0.749s 00:22:05.196 user 0m0.494s 00:22:05.196 sys 0m0.148s 00:22:05.196 14:32:44 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:05.196 14:32:44 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:22:05.197 14:32:44 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:05.197 14:32:44 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:05.197 14:32:44 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:05.197 14:32:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:05.197 
************************************ 00:22:05.197 START TEST bdev_json_nonarray 00:22:05.197 ************************************ 00:22:05.197 14:32:44 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:05.455 [2024-11-20 14:32:44.186847] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:22:05.455 [2024-11-20 14:32:44.187023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91315 ] 00:22:05.455 [2024-11-20 14:32:44.363019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.713 [2024-11-20 14:32:44.509217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.713 [2024-11-20 14:32:44.509374] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:22:05.713 [2024-11-20 14:32:44.509407] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:05.713 [2024-11-20 14:32:44.509438] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:05.971 ************************************ 00:22:05.971 END TEST bdev_json_nonarray 00:22:05.971 ************************************ 00:22:05.971 00:22:05.971 real 0m0.701s 00:22:05.971 user 0m0.454s 00:22:05.971 sys 0m0.140s 00:22:05.971 14:32:44 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:05.971 14:32:44 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:22:05.971 14:32:44 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:22:05.971 14:32:44 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:22:05.971 14:32:44 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:22:05.971 14:32:44 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:22:05.971 14:32:44 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:22:05.971 14:32:44 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:22:05.971 14:32:44 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:05.971 14:32:44 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:22:05.971 14:32:44 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:22:05.971 14:32:44 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:22:05.971 14:32:44 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:22:05.971 00:22:05.971 real 0m49.299s 00:22:05.971 user 1m7.293s 00:22:05.971 sys 0m5.120s 00:22:05.971 14:32:44 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:05.971 ************************************ 00:22:05.971 END TEST blockdev_raid5f 00:22:05.971 
************************************ 00:22:05.971 14:32:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:05.971 14:32:44 -- spdk/autotest.sh@194 -- # uname -s 00:22:05.971 14:32:44 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:22:05.971 14:32:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:05.971 14:32:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:05.971 14:32:44 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:22:05.971 14:32:44 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:22:05.971 14:32:44 -- spdk/autotest.sh@260 -- # timing_exit lib 00:22:05.971 14:32:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:05.971 14:32:44 -- common/autotest_common.sh@10 -- # set +x 00:22:05.971 14:32:44 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:22:05.971 14:32:44 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:22:05.971 14:32:44 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:22:05.971 14:32:44 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:05.971 14:32:44 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:05.971 14:32:44 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:22:05.971 14:32:44 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:22:05.971 14:32:44 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:22:05.971 14:32:44 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:05.971 14:32:44 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:05.971 14:32:44 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:05.971 14:32:44 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:22:05.971 14:32:44 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:05.971 14:32:44 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:22:05.971 14:32:44 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:05.971 14:32:44 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:05.971 14:32:44 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:22:05.971 14:32:44 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:22:05.971 14:32:44 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 
00:22:05.971 14:32:44 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:22:05.971 14:32:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:05.971 14:32:44 -- common/autotest_common.sh@10 -- # set +x 00:22:05.971 14:32:44 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:22:05.971 14:32:44 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:22:05.971 14:32:44 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:22:05.971 14:32:44 -- common/autotest_common.sh@10 -- # set +x 00:22:07.869 INFO: APP EXITING 00:22:07.869 INFO: killing all VMs 00:22:07.869 INFO: killing vhost app 00:22:07.869 INFO: EXIT DONE 00:22:07.869 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:08.127 Waiting for block devices as requested 00:22:08.127 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:08.127 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:09.061 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:09.061 Cleaning 00:22:09.061 Removing: /var/run/dpdk/spdk0/config 00:22:09.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:09.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:09.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:09.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:09.061 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:09.061 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:09.061 Removing: /dev/shm/spdk_tgt_trace.pid56879 00:22:09.061 Removing: /var/run/dpdk/spdk0 00:22:09.061 Removing: /var/run/dpdk/spdk_pid56649 00:22:09.061 Removing: /var/run/dpdk/spdk_pid56879 00:22:09.061 Removing: /var/run/dpdk/spdk_pid57112 00:22:09.061 Removing: /var/run/dpdk/spdk_pid57217 00:22:09.061 Removing: /var/run/dpdk/spdk_pid57268 00:22:09.061 Removing: /var/run/dpdk/spdk_pid57407 00:22:09.061 Removing: /var/run/dpdk/spdk_pid57425 00:22:09.061 
Removing: /var/run/dpdk/spdk_pid57635 00:22:09.061 Removing: /var/run/dpdk/spdk_pid57740 00:22:09.061 Removing: /var/run/dpdk/spdk_pid57847 00:22:09.061 Removing: /var/run/dpdk/spdk_pid57969 00:22:09.061 Removing: /var/run/dpdk/spdk_pid58077 00:22:09.061 Removing: /var/run/dpdk/spdk_pid58122 00:22:09.061 Removing: /var/run/dpdk/spdk_pid58159 00:22:09.061 Removing: /var/run/dpdk/spdk_pid58229 00:22:09.061 Removing: /var/run/dpdk/spdk_pid58324 00:22:09.061 Removing: /var/run/dpdk/spdk_pid58801 00:22:09.061 Removing: /var/run/dpdk/spdk_pid58876 00:22:09.061 Removing: /var/run/dpdk/spdk_pid58951 00:22:09.061 Removing: /var/run/dpdk/spdk_pid58968 00:22:09.061 Removing: /var/run/dpdk/spdk_pid59117 00:22:09.061 Removing: /var/run/dpdk/spdk_pid59139 00:22:09.061 Removing: /var/run/dpdk/spdk_pid59284 00:22:09.061 Removing: /var/run/dpdk/spdk_pid59306 00:22:09.061 Removing: /var/run/dpdk/spdk_pid59375 00:22:09.061 Removing: /var/run/dpdk/spdk_pid59400 00:22:09.061 Removing: /var/run/dpdk/spdk_pid59464 00:22:09.061 Removing: /var/run/dpdk/spdk_pid59482 00:22:09.061 Removing: /var/run/dpdk/spdk_pid59677 00:22:09.061 Removing: /var/run/dpdk/spdk_pid59719 00:22:09.061 Removing: /var/run/dpdk/spdk_pid59803 00:22:09.061 Removing: /var/run/dpdk/spdk_pid61168 00:22:09.061 Removing: /var/run/dpdk/spdk_pid61379 00:22:09.061 Removing: /var/run/dpdk/spdk_pid61525 00:22:09.061 Removing: /var/run/dpdk/spdk_pid62179 00:22:09.061 Removing: /var/run/dpdk/spdk_pid62391 00:22:09.061 Removing: /var/run/dpdk/spdk_pid62531 00:22:09.061 Removing: /var/run/dpdk/spdk_pid63191 00:22:09.061 Removing: /var/run/dpdk/spdk_pid63521 00:22:09.061 Removing: /var/run/dpdk/spdk_pid63674 00:22:09.061 Removing: /var/run/dpdk/spdk_pid65087 00:22:09.061 Removing: /var/run/dpdk/spdk_pid65345 00:22:09.061 Removing: /var/run/dpdk/spdk_pid65491 00:22:09.061 Removing: /var/run/dpdk/spdk_pid66904 00:22:09.061 Removing: /var/run/dpdk/spdk_pid67168 00:22:09.061 Removing: /var/run/dpdk/spdk_pid67308 00:22:09.061 Removing: 
/var/run/dpdk/spdk_pid68729 00:22:09.061 Removing: /var/run/dpdk/spdk_pid69180 00:22:09.061 Removing: /var/run/dpdk/spdk_pid69326 00:22:09.061 Removing: /var/run/dpdk/spdk_pid70842 00:22:09.061 Removing: /var/run/dpdk/spdk_pid71104 00:22:09.061 Removing: /var/run/dpdk/spdk_pid71250 00:22:09.061 Removing: /var/run/dpdk/spdk_pid72754 00:22:09.061 Removing: /var/run/dpdk/spdk_pid73024 00:22:09.061 Removing: /var/run/dpdk/spdk_pid73170 00:22:09.061 Removing: /var/run/dpdk/spdk_pid74693 00:22:09.061 Removing: /var/run/dpdk/spdk_pid75190 00:22:09.061 Removing: /var/run/dpdk/spdk_pid75337 00:22:09.061 Removing: /var/run/dpdk/spdk_pid75481 00:22:09.061 Removing: /var/run/dpdk/spdk_pid75933 00:22:09.061 Removing: /var/run/dpdk/spdk_pid76700 00:22:09.061 Removing: /var/run/dpdk/spdk_pid77097 00:22:09.061 Removing: /var/run/dpdk/spdk_pid77816 00:22:09.061 Removing: /var/run/dpdk/spdk_pid78302 00:22:09.062 Removing: /var/run/dpdk/spdk_pid79095 00:22:09.062 Removing: /var/run/dpdk/spdk_pid79515 00:22:09.062 Removing: /var/run/dpdk/spdk_pid81533 00:22:09.062 Removing: /var/run/dpdk/spdk_pid81985 00:22:09.062 Removing: /var/run/dpdk/spdk_pid82436 00:22:09.062 Removing: /var/run/dpdk/spdk_pid84553 00:22:09.062 Removing: /var/run/dpdk/spdk_pid85051 00:22:09.062 Removing: /var/run/dpdk/spdk_pid85561 00:22:09.062 Removing: /var/run/dpdk/spdk_pid86636 00:22:09.062 Removing: /var/run/dpdk/spdk_pid86964 00:22:09.062 Removing: /var/run/dpdk/spdk_pid87923 00:22:09.062 Removing: /var/run/dpdk/spdk_pid88257 00:22:09.062 Removing: /var/run/dpdk/spdk_pid89206 00:22:09.062 Removing: /var/run/dpdk/spdk_pid89535 00:22:09.062 Removing: /var/run/dpdk/spdk_pid90217 00:22:09.062 Removing: /var/run/dpdk/spdk_pid90493 00:22:09.062 Removing: /var/run/dpdk/spdk_pid90569 00:22:09.062 Removing: /var/run/dpdk/spdk_pid90611 00:22:09.062 Removing: /var/run/dpdk/spdk_pid90861 00:22:09.062 Removing: /var/run/dpdk/spdk_pid91040 00:22:09.062 Removing: /var/run/dpdk/spdk_pid91133 00:22:09.062 Removing: 
/var/run/dpdk/spdk_pid91232 00:22:09.062 Removing: /var/run/dpdk/spdk_pid91285 00:22:09.062 Removing: /var/run/dpdk/spdk_pid91315 00:22:09.062 Clean 00:22:09.320 14:32:48 -- common/autotest_common.sh@1453 -- # return 0 00:22:09.320 14:32:48 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:22:09.320 14:32:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.320 14:32:48 -- common/autotest_common.sh@10 -- # set +x 00:22:09.320 14:32:48 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:22:09.320 14:32:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.320 14:32:48 -- common/autotest_common.sh@10 -- # set +x 00:22:09.320 14:32:48 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:09.320 14:32:48 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:09.320 14:32:48 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:09.320 14:32:48 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:22:09.320 14:32:48 -- spdk/autotest.sh@398 -- # hostname 00:22:09.320 14:32:48 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:09.579 geninfo: WARNING: invalid characters removed from testname! 
00:22:36.124 14:33:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:39.407 14:33:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:42.688 14:33:20 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:45.261 14:33:23 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:47.793 14:33:26 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:50.326 14:33:28 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:52.857 14:33:31 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:52.857 14:33:31 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:52.857 14:33:31 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:52.857 14:33:31 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:52.857 14:33:31 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:52.857 14:33:31 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:52.857 + [[ -n 5200 ]] 00:22:52.857 + sudo kill 5200 00:22:52.867 [Pipeline] } 00:22:52.884 [Pipeline] // timeout 00:22:52.889 [Pipeline] } 00:22:52.903 [Pipeline] // stage 00:22:52.908 [Pipeline] } 00:22:52.925 [Pipeline] // catchError 00:22:52.935 [Pipeline] stage 00:22:52.937 [Pipeline] { (Stop VM) 00:22:52.950 [Pipeline] sh 00:22:53.228 + vagrant halt 00:22:57.416 ==> default: Halting domain... 00:23:02.766 [Pipeline] sh 00:23:03.046 + vagrant destroy -f 00:23:07.238 ==> default: Removing domain... 
00:23:07.256 [Pipeline] sh 00:23:07.534 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:23:07.542 [Pipeline] } 00:23:07.556 [Pipeline] // stage 00:23:07.561 [Pipeline] } 00:23:07.576 [Pipeline] // dir 00:23:07.582 [Pipeline] } 00:23:07.596 [Pipeline] // wrap 00:23:07.602 [Pipeline] } 00:23:07.615 [Pipeline] // catchError 00:23:07.624 [Pipeline] stage 00:23:07.626 [Pipeline] { (Epilogue) 00:23:07.639 [Pipeline] sh 00:23:07.918 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:14.576 [Pipeline] catchError 00:23:14.578 [Pipeline] { 00:23:14.590 [Pipeline] sh 00:23:14.869 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:14.869 Artifacts sizes are good 00:23:14.877 [Pipeline] } 00:23:14.890 [Pipeline] // catchError 00:23:14.901 [Pipeline] archiveArtifacts 00:23:14.908 Archiving artifacts 00:23:15.008 [Pipeline] cleanWs 00:23:15.019 [WS-CLEANUP] Deleting project workspace... 00:23:15.019 [WS-CLEANUP] Deferred wipeout is used... 00:23:15.025 [WS-CLEANUP] done 00:23:15.026 [Pipeline] } 00:23:15.043 [Pipeline] // stage 00:23:15.048 [Pipeline] } 00:23:15.061 [Pipeline] // node 00:23:15.067 [Pipeline] End of Pipeline 00:23:15.108 Finished: SUCCESS